This chapter describes the utilities available to assist you in administering the system hardware, which consists of the CPUs and all associated devices. The utilities work on single systems and on systems joined into clusters. Hardware management involves viewing the status of system devices and performing administrative operations on them if necessary. This includes adding and removing devices, troubleshooting any devices that are not working, and monitoring devices to prevent problems before they occur.
You may also need to administer the software that is associated with devices, such as drivers, kernel pseudodevices and device special files. This software enables devices to communicate and transfer data between system components. Information on administering the related software components is included in this chapter.
Most operations require root user privileges; however, you can assign such privileges to nonroot users using the SysMan division of privileges (DOP) feature. See the dop(8) reference page for more information.
This chapter contains the following sections:
Section 5.1 provides a conceptual overview of hardware management and relates it to the organization of information in this chapter.
Section 5.2 lists other documentation resources that apply to hardware management, including reference pages for commands and utilities. It also identifies key system files and provides pointers to utilities that are associated with hardware administration.
Section 5.3 describes the SysMan hardware management options.
Section 5.4 describes the hardware manager command-line utility, hwmgr. This utility provides full access to hardware management options.
Section 5.5 describes how to use dsfmgr to manage device special files.
Section 5.6 describes how to manually add devices that cannot be added using the hardware manager, and how to create pseudodevices.
Section 5.7 describes targeted utilities for hardware device management.
A hardware device can be any part, or component, of a system. The system is organized in a hierarchy with the CPUs at the top, and discrete devices such as disks and tapes at the bottom. This is sometimes also referred to as the system topology. The following components are typical of the device hierarchy of most computer systems although it is not a definitive list:
The central processing unit (CPU), which may be a single processor system, a multiprocessor system, or a set of processors joined into a cluster. The system is sometimes referred to as a host in the context of hardware management and has a designated host name and perhaps also a host address if the system is on a network. You will often specify commands using the host name. The CPUs are the top of the system hardware hierarchy, and all other system devices are organized under it in the hardware hierarchy.
There are many administrative tasks associated with the CPU, such as bringing CPUs online, starting and stopping them, or sharing CPU resources. These tasks are documented throughout this guide; for example, Chapter 2 documents the options for shutting down the system.
Buses - A system may have a number of main internal communication buses, which transfer data between all devices on the system. Adapters and controllers are physically plugged into buses and have both physical and logical addresses.
Buses may have special software associated with the physical bus, but that software is usually managed within the context of the UNIX operating system. For example, when adding an option card such as a sound or network card to a PCI bus, you have to shut down the system, add the hardware, and reboot. Such devices are often automatically recognized and added to the system configuration on reboot, but you may need to run a firmware utility to install a driver for the device. Always consult your system documentation and the documentation that comes with the card for information on adding such devices.
Controllers and Adapters - A system may have a number of controllers, such as SCSI controllers, which control one or more devices. There may be other controllers, such as the floppy disk interface (fdi), which support only one kind of device and often only one physical disk attached to the controller. A network adapter may be connected to a bus, but will not have any other devices attached to it other than the network.
Adapters occupy a physical slot on a bus, which gives them both a logical address and a physical location to administer. They may also provide slots for devices, which also have physical and logical addresses.
Devices are usually the lowest entities in the system hierarchy, such as SCSI disks, CDROM readers, and tapes. They are typically attached to a controller or adapter, and often have both a physical location and a logical address to administer.
Devices can also be shared between hosts and between other system components. This means that a device may have different names and identifiers associated with it. Understanding how to identify a device and how that device appears to the rest of the hierarchy is an important aspect of hardware management and you often need to know the logical and physical locations of devices.
When referring to SCSI devices in this chapter, the SCSI disk is the device that is most frequently used as an example. Typically, it is the device that is most often the object of management tasks and may appear to the system as a single device, or as a group:
RZ devices - Small Computer System Interface (SCSI) technology is an interface standard to which disks must conform if they are to be supported on the operating system. Note that not all SCSI devices closely conform to this standard and may not be automatically detected and added during a boot, or when using hwmgr to add a device dynamically. You may need to use ddr_config to add such devices, as described in Section 5.6.
HSG and HSZ devices - Storage arrays based on Redundant Array of Inexpensive Disks (RAID) technology. These are storage boxes that contain several connected SCSI disks, appearing to the system as a single device. They may support features such as hot-swapping.
Refer to the RAID(7), SCSI(7), and rz(7) reference pages for more information on device characteristics. Refer to the tz(7) reference page for more information on tape devices. Refer to the Technical Overview and the Software Product Description for the currently supported standards for RAID and SCSI.
Hardware management involves understanding how all the components relate to each other, how they are logically and physically located in the system topology, and how the system software recognizes and communicates with components. To better understand the component hierarchy of a system, refer to Chapter 1 for an introduction to the SysMan Station. This is a graphical utility that displays topological views of the system component hierarchy and allows you to manipulate such views.
Fortunately, the vast majority of hardware management is automated. When you add a device such as a SCSI disk to a system and reboot the system, it will find the device and recognize it, building in any device drivers that it needs. The system will automatically create the software components for that disk as device special files. It only remains for the administrator to partition the disk as needed and create a file system on the partitions (described in Chapter 6) before it can be used to store data. However, you will periodically need to perform some tasks manually, such as when a disk crashes and you need to bring a duplicate device online at the same logical location. You may also need to manually add devices to a running system or redirect the I/O for one disk to another disk. This chapter focuses on these manual tasks.
Many other hardware management tasks are part of regular system operations and maintenance, such as repartitioning a disk or adding an adapter to a bus. Often, such tasks are fully described in the hardware documentation that accompanies the device itself, but you will often need to perform tasks such as checking the system for the optimum (or preferred) physical and logical locations for the new device.
Another important aspect of hardware management is preventative maintenance and monitoring. You should be aware of the following operating system features that can facilitate a healthy system environment:
The Event Manager (EVM) - A utility for filtering all system events and presenting them to the administrator. It includes sophisticated features for warning you of problems by electronic mail or a pager. Refer to Chapter 13 for information on configuring EVM.
The SysMan Station - A graphical utility that enables you to view and monitor the entire system (or cluster) hardware and launch applications to perform administrative tasks on any device. These applications can also be launched from the SysMan Menu, and some example applications are described later in this chapter (see Section 5.3). For information on using the SysMan utilities, refer to Chapter 1.
The system census utility, sys_check - This utility provides you with data on your system's current configuration as an HTML document that you can read with a Web browser. You can use the data as a system baseline, perform tuning tasks, and check all log files. The Storage configuration section provides information on storage devices and file systems. Refer to Chapter 3 for information on running this utility, and on configuring it to run regularly.
Insight Manager - An enterprise-wide, Web-based management tool that enables you to view system and component status anywhere in your local area network. It includes launch points for the SysMan Station, the SysMan Menu, and the system census utility, sys_check. Refer to Chapter 1 for information on configuring and using Insight Manager.
The organization of this chapter reflects the hardware and software components that you manage as follows:
Generic hardware management utilities - These utilities enable you to perform operations on all devices of a type, classes of devices such as SCSI tapes, or individual devices. The utilities may in some cases operate on all systems in a cluster. An example of such a utility is the SysMan Station, which provides you with a graphic display of the entire component hierarchy for all members of a cluster.
Software management - This involves the administration of the software that is associated with hardware components on the system, principally managing the device special files. These are the files associated with a hardware device that enable any application to access the device driver or pseudo-driver for that device.
Targeted hardware management utilities - These utilities enable you to perform a specific task on a specific device. An example is the disk configuration command-line interface, disklabel, and the analogous graphical interface, Disk Configuration (diskconfig), which enable you to partition a disk using the standard layouts or your own custom layouts.
Another way to think of this is that with a generic utility you can perform a task on many devices, while with a targeted utility you can perform a task only on a single device. Note that unless stated otherwise, most operations can be performed on a single system or a cluster. Refer to the TruCluster documentation for additional information on managing cluster hardware.
5.1.1 Logical Storage Manager
The Logical Storage Manager (LSM) consists of physical disk devices, logical entities, and the mappings that connect both. LSM builds virtual disks, called volumes, on top of UNIX system disks. A volume is a special device that contains data managed by a UNIX file system, a database, or other application. LSM transparently places a volume between a physical disk and an application, which then operates on the volume rather than on the physical disk. A file system, for instance, is created on the LSM volume rather than a physical disk.
The LSM software maps the logical configuration of the system to the physical disk configuration. This is done transparently to the file systems, databases, and applications above it because LSM supports the standard block device and character device interfaces to store and retrieve data on LSM volumes. Thus, you do not have to change applications to access data on LSM volumes.
Refer to the Logical Storage Manager manual for more complete information on LSM concepts and commands.
5.2 Reference Information
The following sections contain reference information related to documentation, system files, and other utilities. Some utilities described here are obsolete and will be removed in a future release. Consult the Release Notes for a list of utilities that are scheduled for retirement. If you are using one of these utilities, you should migrate to its replacement as soon as possible. Check your site-specific shell scripts for any calls that may invoke an obsolete utility.
5.2.1 Related Documentation
The following documentation contains information on hardware management:
Books
Device documentation - Consult the device documentation (Owners Manual or User Guide) for information on installing the device and for any required operating system settings. The device documentation will provide information that you may need, such as driver files and configuration settings.
Network Administration - Provides information on configuring or connecting network devices.
Device Driver Documentation Kit - Contains related documents such as: Writing PCI Bus Device Drivers and Writing Device Drivers: Reference.
Reference pages
hwmgr(8) - Contains complete information on the command syntax for the hardware manager utility, /sbin/hwmgr.
dsfmgr(8) - Contains complete information on the command syntax for the device special file management utility, used to create device special files in the /dev directory. Refer also to Section 5.5.
mknod(8), MAKEDEV(8), scsimgr(8), scu(8), ddr_config(8), and devswmgr(8).
Note that most command-line and graphical utilities also provide extensive online help.
5.2.2 Identifying Hardware Management System Files
The following system files contain static or dynamic information that is used to configure the device into the kernel. You should not edit these files manually even if they are ASCII text files. Some files may be Context Dependent Symbolic Links, as described in Chapter 6. If the links are accidentally broken, the files may not be usable in a clustered environment until the links are re-created.
The /dev directory contains device special files. Refer to Section 5.5 for more information.
/etc/ddr_dbase - The DDR (device dynamic recognition) device information database. The contents of this file are compiled into the binary file /etc/ddr.db, which is used by the system to obtain device information.
/etc/dec_devsw_db - A binary database owned by the kernel dev switch code. This database keeps track of the driver major numbers and driver switch entries.
/etc/disktab - This file specifies the disk geometry and partition layout tables. It is useful for identifying disk device names and certain disk device attributes.
/etc/dvrdevtab - This file specifies the database name and the mapping of driver names to special file handlers.
/etc/gen_databases - A text file that contains the information required to convert a database name to a database file location and a database handler.
/etc/dec_hw_db - A binary database that contains hardware persistence information. Generally, this refers to hardware such as buses or controllers.
/etc/dec_hwc_ldb - A binary database that contains information on hardware components that are local to a cluster member.
/etc/dec_hwc_cdb - A binary database that contains information on hardware components that are shared by all members of a cluster. Hardware components with unique cluster names or mapped dev_t values are stored in this database.
/etc/dec_scsi_db - A binary database owned by SCSI/CAM. It stores the world-wide identifier (WWID) of SCSI devices and enables CAM to track all SCSI devices that are known to the system.
/etc/dec_unid_db - A binary database that stores the highest hardware identifier (HWID) previously assigned to a hardware component. This database is used to generate the next HWID to be assigned to a newly installed hardware component.
5.2.3 WWIDs and Shared Devices
SCSI device naming is based on the logical identifier (ID) of a device. This means that the device special filename has no correlation to the physical location of a SCSI device. UNIX uses information from the device to create an identifier called a world-wide identifier, which is usually written as WWID.
Ideally, the WWID for a device is unique, enabling the identification of every SCSI device attached to the system. However, some legacy devices (and even some new devices available from third-party vendors) do not provide the information required to create a unique WWID for a specific device. For such devices, the operating system will attempt to generate a WWID, and in the extreme case will use the device nexus (the SCSI bus/target/lun) to create a WWID for the device.
Consequently, devices that do not have a unique WWID should not be used on a shared bus. If a device that does not have a unique WWID is put on a shared bus, a different device special file will be created for each different path to the device. This can lead to data corruption if two different device special files are used to access the device at the same time. To determine if a device has a cluster unique WWID, use the following command:
# hwmgr -show components
If a device has the c flag set in the FLAGS field, then it has a cluster-unique WWID and can be placed on a shared bus within a cluster; such devices are cluster-shareable.
Note
HSZ devices are an exception to this rule. Although an HSZ device might be marked as cluster-shareable, some firmware revisions on the HSZ preclude multiple initiators from probing the device at the same time. Refer to the owner's manual for the HSZ device and check the Release Notes for any current restrictions.
The following example displays all the hardware components of category disk that have cluster-unique WWIDs:
# hwmgr -show comp -cat disk -cs
HWID: HOSTNAME FLAGS SERVICE COMPONENT NAME
-----------------------------------------------
 35:  pmoba    rcd-- iomap   SCSI-WWID:0410004c:"DEC RZ28 ..."
 36:  pmoba    -cd-- iomap   SCSI-WWID:04100024:"DEC RZ25F ..."
 42:  pmoba    rcd-- iomap   SCSI-WWID:0410004c:"DEC RZ26L ..."
 43:  pmoba    rcds- iomap   SCSI-WWID:0410003a:"DEC RZ26L ..."
 48:  pmoba    rcd-- iomap   SCSI-WWID:0c000008:0000-00ff-fe00-0000
 49:  pmoba    rcd-- iomap   SCSI-WWID:04100020:"DEC RZ29B ..."
 50:  pmoba    rcd-- iomap   SCSI-WWID:04100026:"DEC RZ26N ..."
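For scripting, the FLAGS column can be filtered directly. The following sketch runs over a copied, abbreviated sample of such a listing rather than live hwmgr output; the file name, the trimmed component names, and the non-shareable row 36 are illustrative assumptions, not output from a real system:

```shell
# Hypothetical, abbreviated hwmgr-style listing; field 3 is FLAGS.
# Row 36 deliberately lacks the "c" (cluster-unique WWID) flag.
cat <<'EOF' > /tmp/components.txt
35: pmoba rcd-- iomap SCSI-WWID:0410004c:RZ28
36: pmoba r-d-- iomap SCSI-WWID:04100024:RZ25F
43: pmoba rcds- iomap SCSI-WWID:0410003a:RZ26L
EOF

# Print the HWID and component name of each cluster-shareable row.
awk '$3 ~ /c/ { print $1, $5 }' /tmp/components.txt
```

Only rows 35 and 43 survive the filter, because only their FLAGS field contains c.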
In some rare cases you may have a device that does not supply a unique WWID but must be available on a shared bus. Using such devices on a shared bus is not recommended, but there is a manual command that allows you to set up one of these devices for use on a shared bus. See Section 5.4.4.10 for a description of how to use the hwmgr -edit scsi command option.
5.2.4 Related Utilities
The following utilities are also available for use in managing devices:
The system exerciser utilities enable you to test devices for correct operation. See the diskx(8), tapex(8), cmx(8), fsx(8), and memx(8) reference pages. See also Chapter 12.
The scu utility can be used to maintain and diagnose problems with SCSI peripherals and the CAM I/O subsystem. Refer to the scu(8) reference page and the online help for the command.
The sysconfig command is used to query or modify the kernel subsystem configuration. You use this command to add subsystems to your running kernel, reconfigure subsystems already in the kernel, query subsystems in the kernel, and unconfigure and remove subsystems from the kernel. You can use sysconfig to set some device attribute values. For information on using sysconfig, refer to Chapter 4, which also documents the Kernel Tuner, dxkerneltuner, a graphical utility that you can use to modify attribute values.
CDE Application Manager - SysMan Applications pop-up and System_Admin folders contain several hardware management tools, for example:
Configuration - Graphical utilities used to configure hardware such as ATM, Disk devices, Network devices, PPP (modem) devices, and LAT devices.
DailyAdmin - A graphical utility for power management, which can be used to set power attributes for certain devices.
SysMan Checklist, SysMan Menu, and SysMan Station provide interfaces to configure, monitor, and maintain system devices. The SysMan Menu and SysMan Station can be run from a variety of platforms, such as a personal computer or an X11-based environment. This enables you to perform remote monitoring and management of devices. Refer to Chapter 1 for information.
5.3 Using the SysMan Hardware Utilities
The SysMan Menu Hardware branch provides utilities for hardware management. You can also use the SysMan Station to obtain information about hardware devices and to launch hardware management utilities. The SysMan utilities provide a subset of the many hardware management features that are available from the command line with the hwmgr command. A more detailed discussion of the hwmgr command and its options can be found in Section 5.4.
See also the hwmgr(8) reference page for a complete listing of the command syntax and options. Selecting the help option in one of the SysMan Menu hardware tasks invokes the appropriate reference pages.
When you invoke the SysMan Menu as described in Chapter 1, hardware management options are available under the Hardware branch of the menu. Expanding this branch displays the following tasks:
View hardware hierarchy
View cluster
View device information
View central processing unit (CPU) information
These tasks launch SysMan Menu utilities that are described in the following sections. The first three utilities run instances of the /sbin/hwmgr command to obtain and display system data. Note that the utilities provide a method of finding the data that you use when specifying hardware management operations on system components, such as finding out which disks are on which SCSI buses.
The following option buttons (or choices, in a terminal) are available in all the utilities:
Rerun - Runs the utility again, updating the display.
Stop - Stops the utility. Use the Rerun option to update the display or choose OK to exit the utility.
OK - Ends the task and closes the window.
Help - Displays the reference page.
5.3.1 Viewing the Hardware Hierarchy
The View hardware hierarchy task invokes the command /sbin/hwmgr -view hierarchy, directing the output to the SysMan Menu window (or screen, if a terminal). The following example shows output from a single-CPU system that is not part of a cluster:
View hardware hierarchy

HWID: hardware component hierarchy
---------------------------------------------------
  1: platform AlphaServer 800 5/500
  2:   cpu CPU0
  4:   bus pci0
  5:     connection pci0slot5
 13:       scsi_adapter isp0
 14:         scsi_bus scsi0
 30:           disk bus-0-targ-0-lun-0 dsk0
 31:           disk bus-0-targ-4-lun-0 cdrom0
  7:     connection pci0slot6
 15:       graphics_controller trio0
  9:     connection pci0slot7
 16:       bus eisa0
 17:         connection eisa0slot9
 18:           serial_port tty00
 19:         connection eisa0slot10
 20:           serial_port tty01
 21:         connection eisa0slot11
 22:           parallel_port lp0
 23:         connection eisa0slot12
 24:           keyboard PCXAL
 25:           pointer PCXAS
 26:         connection eisa0slot13
 27:           fdi_controller fdi0
 28:             disk fdi0-unit-0 floppy0
 11:     connection pci0slot11
 29:       network tu0
Use this task to display the hardware hierarchy for the entire system or cluster. The hierarchy shows every bus, controller, and device on the system from the CPUs down to the individual peripheral devices such as disks and tapes. On a system or cluster that has many devices, the output can be lengthy and you may need to scroll the display to see devices at the beginning of the output.
The output is useful because it provides information that is used in many hwmgr command options to perform hardware management operations such as viewing more device detail and adding or deleting devices. The following items shown in the hierarchy can be used as command input:
HWID - The hardware identifier (or id), an integer that is unique to every individual entry in the hierarchy.
The device name, such as pci for the Peripheral Component Interconnect (PCI) bus.
The device basename, a mnemonic followed by an integer that identifies the device, such as cdrom0, which relates to the device special file for the device (/dev/disk/cdrom0). More information on device special file names can be found in Section 5.5.
The physical location attribute specifies the address or path to a device, such as bus-0-targ-0-lun-0, sometimes written as 0/0/0, which provides the following information:
bus-0 is the number of the bus to which the device is attached.
targ-0 is the target number for this device on the bus, in this case the first target on bus 0.
lun-0 is the logical unit number, or lun, in this case the first logical unit number at target 0 on bus 0.
The hardware category of a device, such as a bus or ide_controller.
Connections to slots, which show the slot number for a device, such as pci0slot5 and eisa0slot9.
Bus, controller, and device relationships, such as the following section showing two disk devices on controller scsi_adapter isp0, which is on the bus scsi_bus scsi0:
13: scsi_adapter isp0
14:   scsi_bus scsi0
30:     disk bus-0-targ-0-lun-0 dsk0
31:     disk bus-0-targ-4-lun-0 cdrom0
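Because the category appears in the second field of each hierarchy line, a small awk filter can pull out just the disk HWIDs, locations, and basenames for use in later hwmgr commands. This sketch works over a saved copy of the sample lines above, not a live hwmgr query:

```shell
# Sample hierarchy lines, copied from the excerpt above.
cat <<'EOF' > /tmp/hier.txt
13: scsi_adapter isp0
14: scsi_bus scsi0
30: disk bus-0-targ-0-lun-0 dsk0
31: disk bus-0-targ-4-lun-0 cdrom0
EOF

# For disk entries, field 3 is the physical location and
# field 4 the device basename.
awk '$2 == "disk" { printf "HWID %s %s at %s\n", $1, $4, $3 }' /tmp/hier.txt
```

This prints one line per disk, pairing the HWID with the basename and physical location, which are exactly the identifiers that later hwmgr examples take as input.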
Note that because the same device might be shared (for example, on a shared bus) it may appear in the hierarchy more than once and will have a unique identifier each time it appears. An example of this is given in Section 5.4.4.7.
You can use the information from the -view hierarchy command output in other hwmgr commands when you want to focus an operation on a specific hardware component, as shown in the following command, which gets the value of a device attribute named device_starvation_time for the device with the HWID (id) of 30. Device 30 is the disk device at bus 0, target 0, and lun 0 in the example hierarchy:
# /sbin/hwmgr -get attr -id 30 -a device_starvation_time
30: device_starvation_time = 25 (settable)
The output shows that the value of the device_starvation_time attribute is 25. The label (settable) indicates that this is a configurable attribute that you can set using the following command option:
# /sbin/hwmgr -set attr
5.3.2 Viewing the Cluster
Selecting the View cluster task invokes the command /sbin/hwmgr -view cluster, directing the output to the SysMan Menu window (or screen, if a terminal) as follows:
View cluster

Starting /sbin/hwmgr -view cluster ...
/sbin/hwmgr -view cluster run at Fri May 21 13:42:37 EDT 1999

Member ID  State  Member HostName
---------  -----  ---------------
        1  UP     rene (localhost)
       31  UP     witt
       10  UP     rogr
If you attempt to run this command on a system that is not a member of a cluster, the following message is displayed instead of the system listing:
hwmgr: This system is not a member of a cluster.
The Member ID and the HostName can be specified in some hwmgr commands when you want to focus an operation on a specific member of a cluster, as shown in the following example:
# /sbin/hwmgr -scan scsi -member witt
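If you want to drive such per-member commands from a script, the member host names can be recovered from saved -view cluster output by skipping the two header lines. This is a sketch over a copied sample of the table, not a live cluster:

```shell
# Sample member table, copied from the listing above.
cat <<'EOF' > /tmp/members.txt
Member ID State Member HostName
--------- ----- ---------------
1 UP rene (localhost)
31 UP witt
10 UP rogr
EOF

# Skip the header and rule lines; print hostnames of members that are UP.
awk 'NR > 2 && $2 == "UP" { print $3 }' /tmp/members.txt
```

Each printed name could then be passed as the -member argument of a command such as hwmgr -scan scsi.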
5.3.3 Viewing Device Information
Selecting the View device information task invokes the command /sbin/hwmgr -view devices, directing the output to the SysMan Menu window (or screen, if a terminal). Use this option to display the device information for the entire system or cluster. The output shows every device and pseudodevice (such as /dev/kevm) on the system. The following example shows the output from a small single-CPU system that is not part of a cluster:
View device information

Starting /sbin/hwmgr -view devices ...
/sbin/hwmgr -view devices run at Fri May 21 14:20:08 EDT 1999

HWID: Device Special File  Mfg  Model           Location            Name
------------------------------------------------------------------
   3: /dev/kevm
  28: /dev/disk/floppy0c        3.5in floppy    fdi0-unit-0
  30: /dev/disk/dsk0c      DEC  RZ1DF-CB(C)DEC  bus-0-targ-0-lun-0
  31: /dev/disk/cdrom0c    DEC  RRD47 (C)DEC    bus-0-targ-4-lun-0
For the purpose of this command, a "device" is considered to be any entity in the hierarchy that has the attribute dev_base_name and as such has an associated device special file (DSF). The output from this utility provides the following information, which can be used with the hwmgr command to perform hardware management operations on the device:
HWID - The hardware identifier (or id), an integer that is unique to every individual entry in the hierarchy.
The DSF Name, such as /dev/disk/cdrom0c. In the case of disk devices, this is the name of the device special file associated with the c partition, which maps to the entire capacity of the disk. For a tape, it shows the device special file name that maps to the default density for the device. See Section 5.5 for a description of these names.
The model, which specifies a manufacturer model number or a generic description such as 3.5in floppy.
The physical location of a device, such as the SCSI bus-0-targ-0-lun-0, sometimes written as 0/0/0, which specifies the following:
bus-0 - The number of the bus to which the device is attached; in this case, SCSI bus 0.
targ-0 - The target number for this device on the bus, in this case the first target on the bus.
lun-0 - The logical unit number, or lun, in this case the first on the bus.
The previous output also shows a floppy disk attached to the floppy disk interface, fdi, as device 0, unit 0.
You can specify this information to certain hwmgr commands to perform hardware management operations on a particular device. The following example specifies the device special file for a disk, causing the light (LED) on that disk to flash for 30 seconds. This tells you exactly which device special file is associated with that disk.
# /sbin/hwmgr -flash light -dsf /dev/disk/dsk0c
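You can also script the reverse lookup, from a device special file back to its HWID, by matching the second field of saved -view devices output. The sample below is abbreviated (the Mfg and Model columns are omitted) and illustrative rather than live output:

```shell
# Abbreviated sample of `hwmgr -view devices` output.
cat <<'EOF' > /tmp/devices.txt
3: /dev/kevm
28: /dev/disk/floppy0c fdi0-unit-0
30: /dev/disk/dsk0c bus-0-targ-0-lun-0
31: /dev/disk/cdrom0c bus-0-targ-4-lun-0
EOF

# Print the HWID (with the trailing colon stripped) for a given DSF.
dsf=/dev/disk/dsk0c
awk -v d="$dsf" '$2 == d { sub(/:$/, "", $1); print $1 }' /tmp/devices.txt
```

The printed HWID can then be supplied to options such as hwmgr -get attr -id.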
5.3.4 Viewing CPU Information
Selecting the View central processing unit (CPU) information task invokes the command /usr/sbin/psrinfo -v, directing the output to the SysMan Menu window (or screen, if a terminal). Use this option to display the CPU status information, as shown in the following sample output for a single-processor system.
The output from this utility describes the processor and tells you how long it has been running, as follows:
/usr/sbin/psrinfo

Starting /usr/sbin/psrinfo -v ...
/usr/sbin/psrinfo -v run at Fri May 21 14:22:05 EDT 1999

Status of processor 0 as of: 05/21/99 14:22:05
Processor has been on-line since 05/15/1999 14:42:28
The alpha EV5.6 (21164A) processor operates at 500 MHz,
and has an alpha internal floating point processor.
5.3.5 Using the SysMan Station
The SysMan Station is a graphical utility that runs under various windowing environments or from a web browser. Refer to Chapter 1 and the online help for information on launching and using the SysMan Station.
Features of the SysMan Station that assist you in hardware management are as follows:
The SysMan Station provides a live view of system and component status. You can customize views to focus on parts of a system or cluster that are of most interest to you. You will be notified when a hardware problem occurs on the system.
System views are hierarchical, showing the complete system topology from CPUs down to discrete devices such as tapes. You can observe the layout of buses, controllers, and adapters and see their logical addresses. You can see what devices are attached to each bus or controller, and their slot numbers. Such information is useful for running hwmgr commands from the command prompt.
You can select a device and view detailed attributes of that device. For example, if you select a SCSI disk device and press the right mouse button, a menu of options is displayed. You can choose to view the device properties for the selected disk. If you opt to do this, an extensive table of device properties is displayed. This action is the same as using the hwmgr command, as shown in the following (truncated) sample output:
# hwmgr -get attr -id 30
30:
  name = SCSI-WWID:0c000008:0060-9487-2a12-4ed2
  category = disk
  sub_category = generic
  architecture = SCSI
  phys_location = bus-0-targ-0-lun-0
  dev_base_name = dsk0
  access = 7
  capacity = 17773524
  block_size = 512
  open_part_mask = 59
  disk_part_minor_mask = 4294967232
  disk_arch_minor_mask = 4290774015
<display truncated>
When you select a device, you can also choose to launch a utility and perform configuration or daily administrative operations on the selected device. For example, if you select a network adapter, you can configure its settings or perform related tasks such as configure the domain name server (DNS). You can launch the Event Viewer to see if any system events (such as errors) pertaining to this device have been recently posted.
Note that you can also run the SysMan Station from within
Insight Manager and use it from a PC, enabling you to remotely manage system
hardware.
Refer to
Chapter 1
for more information on remote
management options.
5.4 Using hwmgr to Manage Hardware
The principal generic utility used for managing hardware
is the
hwmgr
command line interface (CLI).
Other utilities, such as the SysMan utilities, provide only a limited subset of the features provided by
hwmgr
.
For example, you can use
hwmgr
to set an attribute for all devices of a particular type (such
as SCSI disks) on all SCSI adapters in all members of a cluster.
Most hardware management is performed automatically by the system and you need only intervene under certain circumstances, such as replacing a failed device so that the replacement device takes on the identity of the failed device. The following sections provide information on:
Understanding the hardware management model
Understanding the principal user options available in
hwmgr
Performing administrative tasks using
hwmgr
5.4.1 Understanding the Hardware Management Model
Within the operating system kernel, hardware data is organized as a
hardware set managed by the kernel set manager (KSM).
Application requests
are passed by library routines to KSM kernel code, or remote code.
The latter
deals with requests to and from other systems.
The hardware component module
(HWC) resides in the kernel, and contains all the registration routines to
create and maintain hardware components in the hardware set.
It also contains
the device nodes for device special file management, which is performed using
dsfmgr
.
The hardware set consists of data structures that describe all of the
hardware components that are part of the system.
A hardware component (or
device) becomes part of the hardware set when registered by its driver.
Devices
have various attributes that describe their function and content.
Each attribute
is assigned a value.
You can read or manipulate these attribute values using
hwmgr
.
Hardware management using the
hwmgr
utility
is organized into three parts, referred to as subsystems by the
hwmgr
utility.
The subsystems are identified as
component
,
scsi
and
name
.
The subsystems
are related to the system hardware databases as follows:
The
component
subsystem references all
hardware devices specified in the (binary)
/etc/dec_hwc_ldb
and
/etc/dec_hwc_cdb
databases.
This includes most devices
on a system.
The
name
subsystem references all hardware
components in the binary
/etc/dec_hw_db
database, often
referred to as the hardware topology.
The database contains hardware persistence
information, maintained by the kernel driver framework and includes data for
buses, controllers and devices.
The
scsi
subsystem references all SCSI
devices in the binary
/etc/dec_scsi_db
database.
The SCSI
database contains entries for all devices managed by the SCSI/CAM architecture.
The specific features of
hwmgr
are as follows:
It is specific to the hardware management subsystem in the kernel, and uses only the KSM hardware set and functions provided by the enhanced management subsystem in the kernel.
It provides a wide range of hardware management features, managing many hardware databases instead of just one or two. (Previous utilities were often focused on a single database.)
It enables you to manage hardware components that are currently unregistered in the KSM hardware set. These may be hardware components seen on a previous system boot, but not currently seen in the active configuration.
It enables you to propagate a management request to multiple members of a cluster.
5.4.2 Understanding hwmgr Command Options
The
hwmgr
utility works with the KSM hardware set
and the kernel hardware management module, providing you with the ability
to manage hardware components.
A hardware component can be a storage peripheral,
such as a disk or tape, or a system component such as a CPU or a bus.
Use
the
hwmgr
utility to manage hardware components on either
a single system or on a cluster.
The
hwmgr
utility provides two types of commands,
internal and generic.
Internal commands do not specify a subsystem identifier
on the command line.
Generic commands are characterized by a subsystem identifier
after the command name.
5.4.2.1 Using Generic Hardware Manager Commands
Generic
hwmgr
commands have the following synopsis:
/sbin/hwmgr [component | name | scsi] [parameter]
Refer to the
hwmgr
(8)
reference page and use the
-help
command option to obtain information on the command syntax, as shown in the
following example:
#
hwmgr -help component
Note that some
hwmgr
commands are duplicated in more
than one subsystem and not all commands are usable across all subsystems.
You should use the subsystem most closely associated with the type of operation
you want to perform.
The following are examples of commands.
Refer to the
hwmgr
(8)
reference page for a definitive list of commands and for additional examples.
-add
- Use this command to add items
to certain databases.
For example, a hardware persistence entry:
#
/sbin/hwmgr -add name -component_name \
    scsi -component_number 1
-delete
- Use this command to delete
information from databases.
For example, the following command will get the
hardware component for the entry being deleted, and pass the request to the
component
subsystem handler to finish the deletion.
Note that if
the entry is not registered in the kernel with HWC (only under unusual circumstances)
the
-delete
option will remove the entry from the CAM database
without calling the
component
subsystem.
#
/sbin/hwmgr -delete name \
    -entry scsi1 -member witt
-scan
- Use the
scan
command to check databases for new device information.
For example, the following
command probes the
scsi
subsystem for new hardware:
#
/sbin/hwmgr -scan scsi
-show
- Use the
show
command to display information from databases, such as the hardware components
from the
component
subsystem.
For example, the following
command will display all hardware components, including hardware components
that were previously registered but may not be currently registered:
#
/sbin/hwmgr -show component
5.4.2.2 Using Internal Hardware Manager Commands
Internal
hwmgr
commands have the following typical
synopsis:
/sbin/hwmgr -get attribute [saved | default | current]
    [-a attribute] [-a attribute=value] [-a attribute!=value]
    [-id hardware-component-id] [-category hardware-category]
    [-member cluster-member-name] [-cluster]
The
-get attribute
command option is only
one of many command options available.
Obtain a complete listing of command
options using the following command:
#
/sbin/hwmgr -help
Refer also
to the
hwmgr
(8)
reference page for a complete list of supported command combinations,
and optional flags.
Examples of commands are shown in the following list:
-view
- Use this command option to
display information.
For example, the following command displays the cluster
status:
#
/sbin/hwmgr -view cluster
Member ID  State  Member HostName
---------  -----  ---------------
        1  UP     rene (localhost)
        2  UP     witt
        3  UP     freu
        4  UP     rogr
The output from this command
provides identifiers that can be used to specify operations in other
hwmgr
command options.
Use the
HostName
whenever
a command option allows you to specify
-member hostname
.
Other supported options are:
env
(environment) - Use this option
to display the current values of environment variables such as
HWMGR_DATA_FILE
, the environment variable that you use to set the location of
the main
hwmgr
data file.
transaction
- Use this option to
display information on the most recent hardware management transaction.
devices
- Use this option to display
all devices on the system.
All devices on the local host will be returned
by default, but you can specify parameters to filter the output.
#
/sbin/hwmgr -view devices
HWID:  DSF Name            Mfg  Model             Location
--------------------------------------------------------------------
  3:   /dev/kevm
 28:   /dev/disk/floppy0c       3.5in floppy      fdi0-unit-0
 30:   /dev/disk/dsk0c     DEC  RZ1DF-CB (C) DEC  bus-0-targ-0-lun-0
 31:   /dev/disk/cdrom0c   DEC  RRD47 (C) DEC     bus-0-targ-4-lun-0
The
output provides the hardware identifier (HWID) number assigned to the device,
which you use as a parameter in other
hwmgr
commands.
The device special file (DSF) name for the device is identified and can also
be used to specify devices in certain
hwmgr
command options.
The hardware vendor's model number is specified, as shown on the device or
its casing.
Finally, the physical location of the device is listed, by bus,
target, and logical unit number (lun).
hierarchy
- Use this option to display
the current hardware component hierarchy in the KSM set.
All devices on
the local host will be returned by default, but you can specify parameters
to filter the output.
The output provides the hardware identifier (HWID) for
each device, the device category, such as
disk
or
bus
, and the persistence name, such as
isp0
.
This information can be used to specify operations in other
hwmgr
commands.
-flash light
- Use this option
to flash the display light (LED) on a SCSI disk device for a default time
period of 30 seconds.
Note that this operation may not work on all SCSI devices,
and you may have to open a cover to access the light on some systems, particularly
where the disk is installed in an internal bay.
-get attribute
or
-set attribute
- Use the
-get
and
-set
commands to return or configure (set) attribute values for a device.
You can
specify the device attributes to manipulate, according to their type and one
or more optional matching parameters.
The type of an attribute can be identified
as follows:
saved
- This is the value of the
attribute that has been configured and stored in the database using the
-set saved
command option.
When you set a saved attribute value,
you change its default value and save that value in the database.
That value
will be read in on all subsequent system starts (boots).
default
- This is the usual value of an attribute that has not been assigned a
saved
or
current
value.
When you add a new device and boot the system, all the
device attributes will have their default values.
current
- This is a temporary value
of the attribute, assigned for the current boot session only.
If you set
an attribute using
-set current
, the saved value is unchanged.
When you shut down and reboot the system, the value of the attribute reverts
to the saved value in the database.
If you want the value to persist, you
must use the
-set saved
command option.
When using the
-get
command, the current values are returned by default.
Not all attributes can always have a current or saved value.
Attribute values may be assigned to a device by the system at startup, so
that the saved value shows 0, but the current value may be different.
You
may only be able to set a few attributes for a given device and these attributes
are identified as
(settable)
in the output from the
-get attr
command option.
For each attribute status (saved, default, or current) you can specify
the following optional parameters for
-get
.
If the attribute
can also be
set
, it is noted in the definition.
-a attribute ...
- Use this option
to return values of an individual attribute, such as
path_fail_limit
, which is a SCSI disk attribute defining the limit for path failures.
You must specify at least one attribute for
-set
operations.
For
-get
operations, if you do not specify
at least one attribute, the operation will get all attributes.
-a attribute=value ...
- Use this
option to return attributes that match the specified value.
For example,
to search for devices that support power management, where the saved value
of power management is enabled (1), use the following command:
#
/sbin/hwmgr -get attribute -a power_mgmt_capable=1
When setting attribute values with
-set
,
use this parameter to specify the new value as follows:
#
/sbin/hwmgr -set attribute current \
    -a user_name=disk_5_bay_4 -id 18
-a attribute!=value ...
- Use this
option to return attributes that do not match the specified value.
For example:
#
hwmgr -get attribute saved -a power_mgmt_capable!=1
This option is supported for
-get
operations only.
-id hardware-component-id
- Use
this option to return the attribute values for the specified hardware device
identifier (HWID).
For example, the following command returns the current
attribute values for device 18:
#
hwmgr -get attribute current -id 18
This option is supported
for
-get
operations only.
-category hardware-category
- Use
this option to specify a hardware category, such as
disk
or
tape
, on which the operation should be performed.
This
option is supported for
-get
operations only.
Note that
you can display all the available categories using the
-get cat
command option.
-member cluster-member-name
- Use
this option to specify the host name of a cluster member on which the operation
should be performed.
This option is supported for both
-get
and
-set
operations.
-cluster
- Use this option to specify
that the operation should be performed cluster-wide.
If this option is
not specified, only data for the local host is returned.
This option is supported
for both
-get
and
-set
operations.
-get category
- Use this option to return a list of all hardware categories available on the system, such
as
platform
,
scsi-bus
, or
disk
.
Use the hardware
category
to specify other
hwmgr
operations, such as in the following example:
#
hwmgr -view devices -cat disk
Section 5.4.4
contains examples
of how you use
hwmgr
to perform administrative tasks.
5.4.3 Configuring the hwmgr Environment
The
hwmgr
utility has some environment settings that you can use to control the amount
of information displayed.
The settings of the environment can be viewed using
the following command, which displays the system default settings:
#
hwmgr -view env
HWMGR_DATA_FILE = "/etc/hwmgr/hwmgr.dat"
HWMGR_DEBUG = FALSE
HWMGR_HEXINTS = FALSE
HWMGR_NOWRAP = FALSE
HWMGR_VERBOSE = FALSE
As with other environment variables, you can set the values in your login script or at the command line, as shown in the following example:
#
HWMGR_VERBOSE=TRUE
#
export HWMGR_VERBOSE
You usually only need to define the
HWMGR_HEXINTS
,
HWMGR_NOWRAP
, and
HWMGR_VERBOSE
values as follows:
If
HWMGR_HEXINTS
is defined as
TRUE
, any numerical data output from a
hwmgr
command is displayed in hexadecimal numbers.
If
HWMGR_NOWRAP
is defined as
TRUE
, the output from
hwmgr
will be truncated
at 80 characters.
In some cases it can be difficult to read the output from
hwmgr
command options as it wraps off the screen.
Setting
HWMGR_NOWRAP
to
TRUE
makes the output more legible
at the console.
A horizontal ellipsis marks truncated lines as follows:
"..."
If
HWMGR_VERBOSE
is defined as
TRUE
, the output from
hwmgr
contains more detailed
information.
The normal output mode of the
hwmgr
utility
hides any errors that are not critical.
To view more verbose information
on the status of command completion, you can also append the
-verbose
switch to any of the
hwmgr
command options.
For example, if you do a query for a KSM attribute that does not exist
for all hardware components, by default the
hwmgr
utility
will only display the output from hardware components that support the attribute,
as shown in the following example:
#
/sbin/hwmgr -get attr -a type
6: type = local
7: type = local
9: type = MOUSE
Not all hardware components on the system support
the attribute
type
, so there are errors generated by this
command which are suppressed if
HWMGR_VERBOSE
is not defined
as
TRUE
.
To see the errors from hardware components that
do not support this attribute, use the
-verbose
switch
with the command line as follows:
#
hwmgr -get attr -a type -verbose
1: Attribute "type" not defined.
2: Attribute "type" not defined.
4: Attribute "type" not defined.
5: Attribute "type" not defined.
6: current type = local
7: current type = local
8: Attribute "type" not defined.
9: current type = MOUSE
10: Attribute "type" not defined.
11: Attribute "type" not defined.
.
.
(long display, output truncated)
The
-verbose
switch can be used with all
hwmgr
commands, although it
does not always produce additional output.
5.4.4 Using hwmgr to Manage Hardware
The following sections contain examples of tasks that you may need to
perform using
hwmgr
.
Some of these examples may not be
useful for managing a small server with a few peripheral devices.
However,
when managing a large installation with many networked systems or clusters
with hundreds of devices they become very useful.
Using
hwmgr
enables you to connect to an unfamiliar system, obtain information about
its device hierarchy, and then perform administrative tasks without any previous
knowledge about how the system is configured and without consulting system
logs or files to find devices.
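For example, a captured device listing can be post-processed with standard text tools to map hardware IDs to device special files. The following is only an illustrative sketch; it runs no hwmgr commands, and the sample lines are copied from example output elsewhere in this chapter:

```shell
# Illustrative sketch: map HWIDs to device special file names from a
# captured "hwmgr -view devices" listing.  The sample lines below are
# copied from example output in this chapter; no hwmgr commands run here.
hwmgr_output=' HWID: DSF Name           Mfg  Model             Location
--------------------------------------------------------------------
    3: /dev/kevm
   30: /dev/disk/dsk0c    DEC  RZ1DF-CB (C) DEC  bus-0-targ-0-lun-0
   31: /dev/disk/cdrom0c  DEC  RRD47 (C) DEC     bus-0-targ-4-lun-0'

# Keep only data rows (first field is a numeric HWID followed by a
# colon) and print "HWID DSF-name".
printf '%s\n' "$hwmgr_output" |
awk '$1 ~ /^[0-9]+:$/ { sub(":", "", $1); print $1, $2 }'
```

In practice you would capture the listing to a file (for example, hwmgr -view devices > devlist.txt) and run the filter on that file.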
5.4.4.1 Locating SCSI Hardware
On systems with many SCSI
peripherals, it can often be difficult to identify a particular device and
associate that device with its logical address or device special file.
The
-flash light
option, which currently only works for some SCSI devices,
enables you to identify a device.
This option has the following syntax:
/sbin/hwmgr -flash light [-dsf device-special-file]
    [-bus N] [-target N] [-lun N]
    [-seconds number] [-nopause]
You might use this command when you are trying to physically
locate a SCSI disk.
For example, a service engineer has arrived and asks
where the system root disk is located.
You know from your
/etc/fstab
file that you are using
/dev/disk/dsk4a
as your
root device, but you do not know where that disk is physically located.
The following command will flash the LED (light-emitting diode) on the
root device for a minute:
#
/sbin/hwmgr -flash light -dsf dsk4 -seconds 60
You can then check the disk bays for the device that is flashing its light.
The LED on the device may be the same LED that is used to indicate normal
disk I/O activity (reads and writes).
If there is much activity on all the
disks, it may not be easy to see which disk is flashing.
In this case, you
can specify the
-nopause
option.
Using
-nopause
will cause the target disk to turn on the LED constantly for the
specified time (the default is 30 seconds).
This option is also very useful
on SCSI RAID devices where you have more than one disk contained in a RAIDSET
and you want all of the disks to turn on their LEDs.
See also the
-locate component
option.
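If you need to identify several disks in turn (for example, every member of a RAIDSET), you can wrap the command in a shell loop. The following sketch only echoes the commands it would run so that you can review them first; the disk names are hypothetical placeholders:

```shell
# Sketch: generate a -flash light command for each disk in a list.
# Remove the leading "echo" to actually run the commands; the dsk
# names here are placeholders, not taken from a real configuration.
for dsf in dsk4 dsk5 dsk6; do
    echo /sbin/hwmgr -flash light -dsf "$dsf" -seconds 60 -nopause
done
```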
5.4.4.2 Viewing the System Hierarchy
The
-view
command can be
used to view the hierarchy of hardware within a system.
Use this command
to find what adapters are controlling devices, and discover where adapters
are installed on buses.
The command syntax is as follows:
/sbin/hwmgr -view hierarchy [-id hardware_component_id]
    [-instance instance_number]
The following example shows typical output on a small system that is not part of a cluster:
#
hwmgr -view hier
HWID: Hardware component hierarchy
----------------------------------
147: platform DEC 3000 - M400
  2:   cpu CPU0
148:   bus tc0
149:     connection tc0slot7
  6:       serial_port tty00
  7:       serial_port tty01
150:       keyboard LK401
151:       pointer VSXXXAA
154:       network ln0
153:     connection tc0slot6
152:       scsi_adapter tcds0
155:     connection tc0slot0
156:       graphics_controller fb0
Note that some devices may appear as multiple entries in the hierarchy. For example, if a disk is on a SCSI bus that is shared between two adapters, the hierarchy will show two entries for the same device.
You can obtain similar views of the system hardware hierarchy using
the SysMan Station.
5.4.4.3 Viewing System Categories
To perform hardware management options on all devices of the same category,
or to select a particular device in a category, you may need to know what
categories of devices are available.
The hardware manager
-get category
command fetches all the possible values for hardware categories,
and has the following syntax:
/sbin/hwmgr -get category
This command is useful when used in conjunction with the
-get/-set attributes
options, which you can use to display and configure
the attributes (or properties) of a particular device.
Once you know the hardware
categories you can limit your attribute queries to a specific type of hardware.
The command produces output similar to the following:
Hardware Categories
-------------------
category = undefined
category = platform
category = cpu
category = pseudo
category = bus
category = connection
category = serial_port
category = keyboard
category = pointer
category = scsi_adapter
category = scsi_bus
category = network
category = graphics_controller
category = disk
category = tape
Your attribute query can then be focused as follows:
#
hwmgr -get attr -category platform
1:
  name = DEC 3000 - M400
  category = platform
This output informs you that the system platform
has a hardware ID of 1, and that the platform name is DEC 3000 - M400.
See also the
-get attribute
and
-set attribute
options.
5.4.4.4 Obtaining Component Attributes
Any device driver that controls a hardware device will register and maintain the KSM (kernel set manager) attributes for that component. Attributes are characteristics of the device that may simply be information, such as the model number of the device, or they may control some aspect of the behavior of the device, such as the speed at which it operates.
The
-get attribute
command fetches and displays KSM
(kernel set manager) attributes for a component.
The hardware manager utility
is specific to managing hardware and fetches KSM attributes only from
the hardware set.
All hardware components are identified by KSM using a
unique hardware identifier, otherwise known as the hardware ID or HWID.
The
syntax of the command was given as an example in
Section 5.4.2.2
to show typical
hwmgr
internal command syntax.
The following command will fetch all attributes for all hardware components on the local system and direct the output to a file which you can then search for information:
#
hwmgr -get attr > sysattr.txt
However, if you know which device category you want to query, as was demonstrated in Section 5.4.4.3, you can focus your query on that particular category.
Querying a hardware component category for its attributes can provide
useful information.
For example, you may not be sure if the network is
working for some reason.
You may not even know what type of network adapters
are installed in a system or how they are configured.
Use the
-get
attribute
option to determine the status of network adapters as
shown in the following example:
#
hwmgr -get attr -category network
203:
  name = ln0
  category = network
  sub_category = Ethernet
  model = DE422
  hardware_rev =
  firmware_rev =
  MAC_address = 08-00-2B-3E-08-09
  MTU_size = 1500
  media_speed = 10
  media_selection = Selected by Jumpers/Switches
  media_type =
  loopback_mode = 0
  promiscuous_mode = 0
  full_duplex = 0
  multicast_address_list = CF-00-00-00-00-00 \
                           01-00-5E-00-00-01
  interface_number = 1
This output provides you with the following information:
The number 203 is the hardware ID (HWID) for this Ethernet adapter.
The fields and values listed below the HWID are the attribute
names and their current values.
Some values may be blank if they are not
initialized by the driver.
Using this information, you know that the system
has a model DE422 Ethernet adapter that has a device name of
ln0
.
You can then check the status of this network adapter using
the
ifconfig
command, as follows:
#
ifconfig ln0
ln0: flags=c62
     inet XX.XXX.XXX.XX netmask ffffff00 \
          broadcast XX.XXX.XX.XXX ipmtu 1500
In some cases, you can change the value of a device attribute
to modify device information or change its behavior on the system.
Setting
attributes is described in
Section 5.4.4.5.
To find which attributes
are settable, you can use the
-get
option to fetch all
attributes and use the
grep
command to search for the
(settable)
keyword as follows:
#
hwmgr -get attr | grep settable
device_starvation_time = 25 (settable)
device_starvation_time = 0 (settable)
device_starvation_time = 25 (settable)
device_starvation_time = 25 (settable)
device_starvation_time = 25 (settable)
device_starvation_time = 25 (settable)
The output shows that there
is one settable attribute on the system,
device_starvation_time
.
Having found this, you can now obtain a list of devices that support this
attribute as follows:
#
hwmgr -get attr -a device_starvation_time
23: device_starvation_time = 25 (settable)
24: device_starvation_time = 0 (settable)
25: device_starvation_time = 25 (settable)
31: device_starvation_time = 25 (settable)
34: device_starvation_time = 25 (settable)
35: device_starvation_time = 25 (settable)
The output from this command
displays the HWID of the devices which support the
device_starvation_time
attribute.
By matching these HWIDs against the hierarchy output, you can further
determine that this attribute is supported by SCSI disks.
See also the
-set attribute
and
-get category
options.
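The cross-check against the hierarchy can also be scripted. This sketch matches a list of HWIDs (as returned for a settable attribute) against captured hierarchy output; both the HWID list and the hierarchy lines below are hypothetical stand-ins in the format used by hwmgr in this chapter:

```shell
# Sketch: report the hardware category of each HWID that supports a
# given attribute.  Both inputs are hypothetical samples; in practice
# you would capture them from hwmgr output.
attr_ids='23 24 25'
hierarchy='23: disk dsk0
24: disk dsk1
25: disk dsk2
148: bus tc0'

# For each HWID, find its hierarchy row and print its category field.
for id in $attr_ids; do
    printf '%s\n' "$hierarchy" |
    awk -v id="$id" '$1 == (id ":") { print id, "category:", $2 }'
done
```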
5.4.4.5 Setting Component Attributes
The
-set attribute
command option allows you to set
(or configure) the value of settable KSM attributes (within the hardware set).
Not all device attributes can be set.
When you use the
-get attribute
command option, the output will flag any attributes that can be
configured by labeling them as
(settable)
next to the attribute
value.
Finding such attributes is described in
Section 5.4.4.4.
The command syntax for setting attribute values is as follows:
/sbin/hwmgr -set attribute [saved | current]
    {-a attribute} {-a attribute=...}...
    [-id hwid] [-member cluster-member-name] [-cluster]
As demonstrated in
Section 5.4.4.4, the value
of
device_starvation_time
is an example of a settable attribute
supported by SCSI disks.
This attribute controls the amount of time that must
elapse before the disk driver determines that a device is unreachable due
to SCSI bus starvation (no data transmitted).
If the
device_starvation_time
expires before the driver is able to determine that the device
is still there, the driver will post an error event to the binary error log.
Using the following commands, you can change the value of the
device_starvation_time
attribute for the device with the HWID of
24, and then verify the new value:
#
hwmgr -set attr -id 24 -a device_starvation_time=60
#
hwmgr -get attr -id 24 -a device_starvation_time
24: device_starvation_time = 60 (settable)
This action does not change
the
saved
value for this attribute.
All attributes have
three possible values, a
current
value, a
saved
value and a
default
value.
The
default
value is a constant and can never be set.
If you never set a value
of an attribute, the default value applies.
The
saved
value
can be set and persists across boots.
You can think of it as a permanent
override of the
default
.
The
current
value can be set but does not persist
across reboots.
You can think of it as a temporary value for the attribute.
When a system is rebooted, the value of the attribute will revert to the
saved
value (if there is a
saved
value).
If
there is no
saved
value the attribute value will revert
to the
default
value.
Setting an attribute value always
changes the
current
value of the attribute.
The following
examples show how you get and set the
saved
value of an
attribute:
#
hwmgr -get attr saved -id 24 -a device_starvation_time
24: saved device_starvation_time = 0 (settable)
#
hwmgr -set attr saved -id 24 -a device_starvation_time=60
saved device_starvation_time = 60 (settable)
#
hwmgr -get attr saved -id 24 -a device_starvation_time
24: saved device_starvation_time = 60 (settable)
See also the
-get attribute
and
-get category
command options.
5.4.4.6 Viewing the Cluster
If you are working on a cluster, you often
need to focus hardware management commands at a particular host on the cluster.
The
-view cluster
command option enables you to obtain
details of the hosts in a cluster.
The following sample output shows a typical
cluster:
Member ID  State  Member HostName
---------  -----  ---------------
        1  UP     ernie.zok.paq.com (localhost)
        2  UP     bert.zok.paq.com
        3  DOWN   bigbird.zok.paq.com
This option can also be
used to verify that the
hwmgr
utility is aware of all cluster
members and their current status.
The command has the following syntax:
/sbin/hwmgr -view cluster
The preceding example indicates a three-member cluster with one member
(bigbird
) currently down.
The
(localhost)
marker tells us that
hwmgr
is currently running on cluster
member
ernie
.
Any
hwmgr
commands that
you enter using the
-cluster
option will be sent to members
bert
and
ernie
, but not to
bigbird
as that system is unavailable.
Additionally, any
hwmgr
commands that you issue using the
-member bigbird
option
will fail because the cluster member state for that host is
DOWN
.
Note that this command only works if the system is a member of a cluster.
If you attempt to run it on a single system an error message is displayed.
See also the
clu_get_info
command, and refer to the TruCluster
documentation for more information on clustered systems.
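When scripting cluster-wide work, you may want to target only members whose state is UP. The following sketch extracts those host names from a captured cluster view (the sample is the one shown above); each name could then be passed to the -member option:

```shell
# Sketch: list the host names of cluster members that are UP, from a
# captured "hwmgr -view cluster" listing (sample from this section).
cluster_view='Member ID State Member HostName
--------- ----- ---------------
        1 UP    ernie.zok.paq.com (localhost)
        2 UP    bert.zok.paq.com
        3 DOWN  bigbird.zok.paq.com'

# Skip the two header lines; print the host name where State is UP.
printf '%s\n' "$cluster_view" |
awk 'NR > 2 && $2 == "UP" { print $3 }'
```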
5.4.4.7 Viewing Devices
You can use
hwmgr
to display all devices that have
a device special file name, such as
/dev/disk/dsk34
using
the
-view devices
option.
The hardware manager considers
any hardware component that has the KSM attribute
dev_base_name
to be an accessible device.
(See
Section 5.4.4.4
for information
on obtaining the attributes of a device.) This command has the following syntax:
/sbin/hwmgr -view devices [-category hardware_category]
    [-member cluster-member-name] [-cluster]
This command option enables you to determine what devices are
currently registered with hardware management on a system, and provides
information that enables you to access these devices through their device special files.
For example, if you load a CD-ROM into a reader, this output could be used
to determine that the CD-ROM reader should be mounted as
/dev/disk/cdrom0
.
The
-view devices
option is also useful
to find the hardware identifiers (HWID) for any registered devices.
When
you know the HWID for a device, you can use other
hwmgr
command options to query KSM attributes on the device, or perform other operations
on the device.
Typical output from this command is shown in the following example:
#
hwmgr -view dev
HWID:  DSF Name            Mfg     Model     Location
----------------------------------------------------------------------
  3:   /dev/kevm
 22:   /dev/disk/dsk0c     DEC     RZ26      bus-0-targ-3-lun-0
 23:   /dev/disk/cdrom0c   DEC     RRD42     bus-0-targ-4-lun-0
 24:   /dev/disk/dsk1c     DEC     RZ26L     bus-1-targ-2-lun-0
 25:   /dev/disk/dsk2c     DEC     RZ26L     bus-1-targ-4-lun-0
 29:   /dev/ntape/tape0    DEC     TLZ06     bus-1-targ-6-lun-0
 35:   /dev/disk/dsk8c     COMPAQ  RZ1CF-CF  bus-2-targ-12-lun-0
The listing of devices shows all hardware components that have the
dev_base_name
attribute on the local system.
The hardware manager
attempts to resolve the
dev_base_name
to the full path
location to the device special file, such as
/dev/ntape/tape0
.
It always uses the path to the device special file with partition
c
because that partition is usually used to represent the entire
capacity of the device, except in the case of tapes.
See
Section 5.5
for information on device special file names and functions.
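Because the listing is plain text, it can be filtered with standard tools. The following sketch prints the HWID for a given device special file name; the helper function and its sample lines are hypothetical stand-ins for the output format shown above, and on a live system you would pipe `hwmgr -view devices` directly into awk.

```shell
# Sample lines mirroring the -view devices output above (header omitted).
sample_view_devices() {
cat <<'EOF'
 23: /dev/disk/cdrom0c  DEC  RRD42  bus-0-targ-4-lun-0
 24: /dev/disk/dsk1c    DEC  RZ26L  bus-1-targ-2-lun-0
EOF
}
# Print the HWID of the row whose DSF Name column matches the requested
# device special file; the trailing colon is stripped from the HWID.
sample_view_devices | awk -v dsf=/dev/disk/dsk1c \
    '$2 == dsf { sub(":", "", $1); print $1 }'
```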
If you are working on a cluster, you can view all devices registered
with hardware management across the entire cluster with the
-cluster
option, as follows:
#
hwmgr -view devices -cluster
 HWID:  DSF Name            Model   Location             Hostname
 ------------------------------------------------------------------
  20:   /dev/disk/floppy0c  3.5in   fdi0-unit-0          tril7e
  34:   /dev/disk/cdrom0c   RRD46   bus-0-targ-5-lun-0   tril7e
  35:   /dev/disk/dsk0c     HSG80   bus-4-targ-1-lun-1   tril7d
  35:   /dev/disk/dsk0c     HSG80   bus-6-targ-1-lun-1   tril7e
  36:   /dev/disk/dsk1c     RZ26N   bus-1-targ-0-lun-0   tril7e
  37:   /dev/disk/dsk2c     RZ26N   bus-1-targ-1-lun-0   tril7e
  38:   /dev/disk/dsk3c     RZ26N   bus-1-targ-2-lun-0   tril7e
  39:   /dev/disk/dsk4c     RZ26N   bus-1-targ-3-lun-0   tril7e
  40:   /dev/disk/dsk5c     RZ26N   bus-1-targ-4-lun-0   tril7e
  41:   /dev/disk/dsk6c     RZ26N   bus-1-targ-5-lun-0   tril7e
  42:   /dev/disk/dsk7c     RZ26N   bus-1-targ-6-lun-0   tril7e
  43:   /dev/disk/dsk8c     HSZ40   bus-3-targ-2-lun-0   tril7d
  43:   /dev/disk/dsk8c     HSZ40   bus-3-targ-2-lun-0   tril7e
  44:   /dev/disk/dsk9c     HSZ40   bus-3-targ-2-lun-1   tril7d
  44:   /dev/disk/dsk9c     HSZ40   bus-3-targ-2-lun-1   tril7e
  45:   /dev/disk/dsk10c    HSZ40   bus-3-targ-2-lun-2   tril7d
  45:   /dev/disk/dsk10c    HSZ40   bus-3-targ-2-lun-2   tril7e
Note
that some devices, such as the disk with the HWID of
45:
,
appear more than once in this display.
These are devices that are on a shared
bus between two cluster members.
The hardware manager displays the device
entry as seen from each cluster member.
See also the following
hwmgr
command options:
-show scsi
,
-show components
, and
-get attributes
.
5.4.4.8 Viewing Transactions
Hardware management operations are transactions that need to be synchronized
across a cluster.
The
-view transactions
command option
displays the state of any hardware management transactions that have occurred
since the system was booted.
This option can be used to check for failed
hardware management transactions.
The command option has the following syntax:
/sbin/hwmgr -view transactions
If you do not specify the
-cluster
or
-member
option, the command displays status on transactions that have been
processed or initiated by the local host (the system on which the command
is entered).
Note that the
-view transactions
command is
primarily for debugging problems with hardware management in a cluster, and
you will not need to use this command very often, if ever.
The command has
the following typical output:
#
hwmgr -view trans
hardware management transaction status
-----------------------------------------------------
there is no active transaction on this system

the last transaction initiated from this system was:
    transaction = modify cluster database
    proposal    = 3834
    sequence    = 0
    status      = 0

the last transaction processed by this system was:
    transaction = modify cluster database
    proposal    = 3834
    sequence    = 0
    status      = 0

proposal                       last status  success  fail
----------------------------   -----------  -------  ----
Modify CDB/ 3838                    0          3      0
Read CDB/ 3834                      0          3      0
No operation/ 3835                  0          1      0
Change name/ 3836                   0          0      0
Change name/ 3837                   0          0      0
Locate HW/ 3832                     0          0      0
Scan HW/ 3801                       0          0      0
Unconfig HW - confirm/ 3933         0          0      0
Unconfig HW - commit/ 3934          0          0      0
Delete HW - confirm/ 3925           0          0      0
Delete HW - commit/ 3926            0          0      0
Redirect HW - confirm/ 3928         0          0      0
Redirect HW - commit1/ 3929         0          0      0
Redirect HW - commit2/ 3930         0          0      0
Refresh - lock/ 3937                0          0      0
From this
output you can tell that the last transaction that occurred was a modification
of the cluster database.
5.4.4.9 Deleting a SCSI Device
Under some circumstances, you may want to remove a SCSI device from
a system, such as when it is logging device errors and must be replaced.
Use
the
-delete scsi
command option to remove a SCSI component
from all hardware management databases cluster-wide.
This option unregisters
the component from the kernel, removes all persistent database entries for
the device, and removes all device special files.
When you delete a SCSI
component it is no longer accessible and its device special files will be
removed from the appropriate
/dev
subdirectory.
Note that
you cannot delete a SCSI component that is currently open, and all connections
to the device (such as mounts) must be terminated.
Usually, you might delete a SCSI component if it was being removed
from your system and you did not want to have any information about the device
remaining on the system.
You might also want to delete a SCSI component
if there were software, rather than hardware, problems; for example, if the
device was operating properly but could not be accessed through the device
special file for some reason.
In this case, you could delete the component
and use the
-scan scsi
command option to find and register
it as if it were a newly installed device.
To replace the SCSI device (or bring the old device back) you can use
the
-scan scsi
command option to find the device again.
However, when you delete a component and then perform a
-scan
operation to bring the component back on line, it may not be assigned the
device special file name that it previously held.
To replace a device as an
exact replica of the original, you need to perform the additional operations
described in
Section 5.4.4.11.
In addition, there is also
no guarantee that the subsequent
-scan
operation will find
the device if it is not actively responding during the bus scan.
The
-delete scsi
command option has the following
syntax:
/sbin/hwmgr -delete scsi [-did scsi-device-identifier]
Note that the SCSI device identifier
-did
is not equivalent to the hardware identifier (HWID).
The following examples show how you check the SCSI database and then delete a SCSI device:
#
hwmgr -show scsi
         SCSI               DEVICE  DEVICE   DRIVER  NUM   DEVICE  FIRST
 HWID:  DEVICEID  HOST-      TYPE   SUBTYPE  OWNER   PATH  FILE    VALID
                  NAME                                             PATH
 -----------------------------------------------------------------------
  23:      0      bert       disk    none      2      1    dsk0    [0/3/0]
  24:      1      bert       cdrom   none      0      1    cdrom0  [0/4/0]
  25:      2      bert       disk    none      0      1    dsk1    [1/2/0]
  30:      4      bert       tape    none      0      1    tape2   [1/6/0]
  31:      3      bert       disk    none      0      1    dsk4    [1/4/0]
  34:      5      bert       disk    none      0      1    dsk7    [2/5/0]
  35:      6      bert       disk    none      0      1    dsk8
In this example,
component ID 23 is currently open by a driver.
You can see this because the
DRIVER OWNER
field is not zero; any number other than zero in the
DRIVER OWNER
field means that a driver has opened the device for
use.
Therefore, you cannot delete SCSI component 23 because it is currently
being used.
However, component ID 35 is not open by a driver, and it currently has
no valid paths shown in the
FIRST VALID PATH
field.
This
means that the device is not currently accessible and can be safely deleted.
The
/dev/disk/dsk8*
and
/dev/rdisk/dsk8*
device special files will also be deleted.
To delete the SCSI device, specify the SCSI DEVICEID value with the
-delete
option, and then review the SCSI database as follows:
#
hwmgr -del scsi -did 6
hwmgr: The delete operation was successful.
#
hwmgr -show scsi
         SCSI               DEVICE  DEVICE   DRIVER  NUM   DEVICE  FIRST
 HWID:  DEVICE-   HOSTNAME   TYPE   SUBTYPE  OWNER   PATH  FILE    VALID
        ID                                                         PATH
 -----------------------------------------------------------------------
  23:      0      bert       disk    none      2      1    dsk0    [0/3/0]
  24:      1      bert       cdrom   none      0      1    cdrom0  [0/4/0]
  25:      2      bert       disk    none      0      1    dsk1    [1/2/0]
  30:      4      bert       tape    none      0      1    tape2   [1/6/0]
  31:      3      bert       disk    none      0      1    dsk4    [1/4/0]
  34:      5      bert       disk    none      0      1    dsk7    [2/5/0]
The
device
/dev/disk/dsk8
has been successfully deleted.
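The safe-to-delete check described above (a zero DRIVER OWNER field and no entry in the FIRST VALID PATH field) can be scripted. This is a sketch against sample rows in the `-show scsi` format; the helper function is hypothetical and the field positions are assumptions based on the example output:

```shell
# Sample rows mirroring the -show scsi listing above (header omitted).
sample_show_scsi() {
cat <<'EOF'
 23:  0  bert  disk   none  2  1  dsk0    [0/3/0]
 24:  1  bert  cdrom  none  0  1  cdrom0  [0/4/0]
 35:  6  bert  disk   none  0  1  dsk8
EOF
}
# A component is a deletion candidate when the DRIVER OWNER field ($6)
# is 0 and the FIRST VALID PATH field ($9) is empty; print its SCSI
# device id ($2), suitable for `hwmgr -delete scsi -did <id>`.
sample_show_scsi | awk '$6 == 0 && $9 == "" { print $2 }'
```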
5.4.4.10 Creating a User-Defined SCSI Device Name
Most devices have an identification attribute that is unique to the
device.
This can be read as the
serial_number
or
name
attribute of a SCSI device.
For example, the following
hwmgr
command will return both these attributes for the device with HWID
30, a SCSI disk:
#
hwmgr -get attributes -id 30 -a serial_number -a name
30:
  serial_number = SCSI-WWID:0c000008:0060-9487-2a12-4ed2
  name          = SCSI-WWID:0c000008:0060-9487-2a12-4ed2
This string is known as a world-wide identifier (WWID) because it is unique for every device on the system.
Some older devices do not provide a unique identifier, so the operating
system will create such a number for the device using valid path
bus/target/lun
data that describes the physical location of the
device.
Because a device can be shared by more than one system (or more than
one bus) each system that has access to the device will see a different path
and will create its own unique WWID for that device.
This creates the possibility
of concurrent access to a device, and data on the device could be corrupted.
To check for such devices, use the following command:
#
hwmgr -show comp -cshared
 HWID:  HOSTNAME  FLAGS  SERVICE  COMPONENT NAME
 -----------------------------------------------
  40:   joey      -cd--  iomap    SCSI-WWID:04100026:"DEC     RZ28M    (C) DEC00S846590H7CCX"
  41:   joey      -cd--  iomap    SCSI-WWID:04100026:"DEC     RZ28L-AS (C) DECJEE019480P2VSN"
  42:   joey      -cd--  iomap    SCSI-WWID:0410003a:"DEC     RZ28     (C) DECPCB=ZG34142470 ; HDA=000034579643"
  44:   joey      rcd--  iomap    SCSI-WWID:04100026:"DEC     RZ28M    (C) DEC00S735340H6VSR"
   .
   .
   .
You can use
hwmgr
to create a user-defined unique
name that will in turn enable you to create a WWID that is common to all systems
that are sharing the device.
This means that the device will have a common
WWID and one set of device special file names.
The process for creating a user-defined name is as follows:
Choose the name that you want to assign. This name should be unique within the scope of all systems that have access to the device. Although it need not be as long and complex as the WWIDs shown in the preceding example, it should be sufficiently long to provide the information that you need to recognize the renamed device and differentiate it from others.
Decide what devices will use this name. When renamed, the device will be seen as the same device on all systems. You must update the systems so that the device can be seen.
Each system that shares the device will create a new WWID
using the string and use this new WWID for all subsequent registrations with
the system.
Internally, the device will still be tracked by its default WWID
(if one existed).
However, all external representations will display the
new WWID based on the user-defined name.
On a cluster you must run the
-edit scsi
command option on every cluster member that has access
to the device.
Caution
All systems with access to the device should be updated. Otherwise, the access controls that ensure data coherency may not be valid, and data may be corrupted.
The
-edit scsi
command option has the following
syntax:
/sbin/hwmgr -edit scsi [-did device-id] [-uwwid user-defined-name] [-member cluster-member-name]
The following example shows how you assign a user-defined name:
#
hwmgr -show scsi
         SCSI               DEVICE  DEVICE   DRIVER  NUM   DEVICE  FIRST
 HWID:  DEVICEID  HOST       TYPE   SUBTYPE  OWNER   PATH  FILE    VALID
                  NAME                                             PATH
 -----------------------------------------------------------------------
  22:      0      ftwod      disk    none      0      1    dsk0    [0/3/0]
  23:      1      ftwod      cdrom   none      0      1    cdrom0  [0/4/0]
  24:      2      ftwod      disk    none      0      1    dsk1    [1/2/0]
  25:      3      ftwod      disk    none      2      1    dsk2    [2/4/0]
This
command displays which SCSI devices are on the system.
On this system the
administrator knows that there is a shared bus and that hardware components
24 and 25 are actually the same device.
The WWID for this device
is constructed using the bus/target/lun address information.
Because the
bus/target/lun addresses are different, the device is seen as two separate
devices.
This can cause data corruption problems because two sets of device
special files can be used to access the disk (/dev/disk/dsk1
and
/dev/disk/dsk2
).
The following command shows how you can rename the device, and demonstrates how it appears after being renamed:
#
hwmgr -edit scsi -did 2 -uwwid "this is a test"
hwmgr: Operation completed successfully.
#
hwmgr -show scsi -did 2 -full
         SCSI               DEVICE  DEVICE   DRIVER  NUM   DEVICE  FIRST
 HWID:  DEVICEID  HOSTNAME   TYPE   SUBTYPE  OWNER   PATH  FILE    VALID PATH
 ----------------------------------------------------------------------------
  24:      2      ftwod      disk    none      0      1    dsk1    [1/2/0]

      WWID:0910003c:"DEC     (C) DECZG41400123ZG41800340:d01t00002l00000"
      WWID:ff10000e:"this is a test"

      BUS   TARGET   LUN   PATH STATE
      -------------------------------
       1      2       0    valid
The operation is repeated on the other
device path and the same name is given to the device at address
2/4/0
.
When this is done, hardware management will use the user-defined
name to track the device and recognize it as an alternate path to
the same device:
#
hwmgr -edit scsi -did 3 -uwwid "this is a test"
hwmgr: Operation completed successfully.
#
hwmgr -show scsi -did 3 -full
         SCSI               DEVICE  DEVICE   DRIVER  NUM   DEVICE  FIRST
 HWID:  DEVICEID  HOSTNAME   TYPE   SUBTYPE  OWNER   PATH  FILE    VALID PATH
 ----------------------------------------------------------------------------
  25:      3      ftwod      disk    none      0      1    dsk1    [2/4/0]

      WWID:0910003c:"DEC     (C) DECZG41400123ZG41800340:d02t00004l00000"
      WWID:ff10000e:"this is a test"

      BUS   TARGET   LUN   PATH STATE
      -------------------------------
       2      4       0    valid
Both of these devices now use device
special file name
/dev/disk/dsk1
and there is no longer
a danger of data corruption as a result of two sets of device special files
accessing the same disk.
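The renaming workflow above can be collected into a short script. Because `hwmgr` exists only on Tru64 UNIX systems, this sketch echoes the commands instead of executing them; clear the `run` variable to execute them for real. The device IDs and the name string are taken from the example above.

```shell
run=echo                      # set run= (empty) to execute for real
uwwid="this is a test"        # the user-defined name to assign
for did in 2 3; do            # one -edit scsi per path to the shared disk
    $run /sbin/hwmgr -edit scsi -did $did -uwwid "$uwwid"
done
# Verify that both paths now report the same user-defined WWID.
$run /sbin/hwmgr -show scsi -did 2 -full
```

On a cluster, remember that the same `-edit scsi` pass must be run on every member with access to the device.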
5.4.4.11 Replacing a Failed SCSI Device
When a SCSI device fails, you may want to replace
it in such a way that the replacement disk takes on hardware characteristics
of the failed device, such as ownership of the same device special files.
The
-redirect
command option enables you to assign such
characteristics.
For example, if you have an HSZ (RAID) cabinet and a disk
fails, you can hot-swap the failed disk and then use the
-redirect
command option to bring the new disk on line as a replacement for
the failed disk.
Note
The replacement device must be of the same device type for the
-redirect
operation to work.
This command has the following syntax:
/sbin/hwmgr -redirect scsi
[-src scsi-device-id
]
[-dest scsi-device-id
]
The following examples show how you use the
-redirect
option:
#
/sbin/hwmgr -show scsi
         SCSI               DEVICE  DEVICE   DRIVER  NUM   DEVICE  FIRST
 HWID:  DEVICE-   HOST-      TYPE    SUB-    OWNER   PATH  FILE    VALID
        ID        NAME               TYPE                          PATH
 -----------------------------------------------------------------------
  23:      0      fwod       disk    none      2      1    dsk0    [0/3/0]
  24:      1      fwod       cdrom   none      0      1    cdrom0  [0/4/0]
  25:      2      fwod       disk    none      0      1    dsk1    [1/2/0]
  30:      4      fwod       tape    none      0      1    tape2   [1/6/0]
  31:      3      fwod       disk    none      0      1    dsk4
  37:      5      fwod       disk    none      0      1    dsk10   [2/5/0]
This output shows a failed SCSI disk with HWID 31.
The device has no valid
paths.
To replace this failed disk with a new disk that has device special
file name
/dev/disk/dsk4
, and the same
dev_t
information, use the following procedure:
Install the device as described in the hardware manual.
Use the following command to find the new device:
#
/sbin/hwmgr -scan scsi
This
command probes the SCSI subsystem for new devices and registers those devices.
You can then repeat the
-show scsi
command and obtain the
SCSI device id of the replacement device.
Use the following command to reassign the device characteristics
from the failed disk to the replacement disk.
This example assumes that the
SCSI device id (did
) assigned to the new disk is 36:
#
/sbin/hwmgr -redirect scsi -src 3 -dest 36
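The replacement procedure can be sketched as follows. Again, because `hwmgr` is Tru64-specific the commands are echoed rather than executed; `src` is the failed device's SCSI device id and `dest` is the id the `-scan scsi` pass assigned to the new disk (36 in the example above).

```shell
run=echo    # set run= (empty) to execute for real on a Tru64 system
src=3       # SCSI device id of the failed disk (from -show scsi)
dest=36     # SCSI device id assigned to the replacement by -scan scsi
$run /sbin/hwmgr -scan scsi       # probe the SCSI subsystem, register the new disk
$run /sbin/hwmgr -show scsi       # note the device id of the replacement
$run /sbin/hwmgr -redirect scsi -src $src -dest $dest
```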
5.4.4.12 Viewing the name Persistence Database
The name persistence database stores information
about the hardware topology of the system.
This data is maintained by the
kernel and includes data for controllers and buses in addition to devices.
Use the
-show name
command option to display persistence
data which you can then manipulate using other
hwmgr
commands.
The command has the following syntax:
/sbin/hwmgr -show name
[-member cluster-member-name
]
The following example shows typical output from the
-show name
command option on a small system:
#
hwmgr -show name -member ychain
 HWID:  NAME   HOSTNAME  PERSIST TYPE  PERSIST AT
 -----------------------------------------------------
  13:   isp0   ychain    BUS           pci0 slot 5
   4:   pci0   ychain    BUS           nexus
  14:   scsi0  ychain    CONTROLLER    isp0 slot 0
  29:   tu0    ychain    CONTROLLER    pci0 slot 11
The following information is provided by the output:
HWID:
- The unique hardware identifier
for this device.
This can also be determined by the
-view hierarchy
command option.
NAME
- The device name and the instance
number such as
pci0
for peripheral component interconnect
(PCI) bus 0.
Each additional PCI bus will have a different instance number.
HOSTNAME
- The host on which the
command was run.
When working in a cluster, you can specify the cluster member
on which the command is to operate.
PERSIST TYPE
- The type of hardware
component, which can be a bus, controller, or device.
PERSIST AT
- The logical address
of the device, which may map to a physical location in the hardware.
For example,
the SCSI controller
scsi0
persists at
slot 0
of the bus
isp0.
5.4.4.13 Deleting and Removing a Name from the Persistence Database
One of the options for manipulating the name subsystem is to remove
devices from the persistence database.
The
hwmgr
utility
offers two methods of removal:
-remove
- Use this option to take
an entry out of the persistence database.
This will not affect the running
system, but at the next reboot, the device will no longer be seen.
-delete
- Use this option to take
an entry out of the persistence database and delete it from the running system.
This command will unregister and unconfigure the device, removing it from
all hardware management databases.
These commands have the following syntax:
/sbin/hwmgr -remove name
[-entry name
]
/sbin/hwmgr -delete name
[-entry name
]
Where
name
is the device name shown
in the output from the
-show name
command option described
in
Section 5.4.4.12.
The following example shows typical output from the
-show name
command option on a small system:
#
hwmgr -show name
 HWID:  NAME   HOSTNAME  PERSIST TYPE  PERSIST AT
 ------------------------------------------------------
  33:   aha0   fegin     BUS           eisa0 slot 7
  31:   ln0    fegin     CONTROLLER    eisa0 slot 5
   8:   pci0   fegin     BUS           ibus0 slot 0
  34:   scsi1  fegin     CONTROLLER    aha0 slot 0
  17:   scsi0  fegin     CONTROLLER    psiop0 slot 0
  15:   tu0    fegin     CONTROLLER    pci0 slot 0
Note that there
are two
scsi
adapters shown.
If
scsi0
is the target of a
-remove
operation then
scsi1
would not become
scsi0
.
The information about
where the adapter is located persists at
aha0 slot 0
and
the name
scsi1
is saved across boots.
To remove
scsi0
and rename
scsi1
you would use the following commands:
#
hwmgr -remove name -ent scsi0
#
hwmgr -edit name -ent scsi1 -parent_num 0
5.5 Device Naming and Device Special Files
Devices are made available to the rest of the system through device
special files located in the
/dev
directory.
A device special
file enables an application (such as a database application) to access a device
through its device driver, which is a kernel module that controls one or more
hardware components of a particular type, for example, network controllers,
graphics controllers, and disk devices (including CD-ROM devices).
See
Section 5.4
for a discussion of system components.
Device special files
are also used to access pseudodevice drivers that do not control a hardware
component, for example, a pseudoterminal (pty
) terminal
driver, which simulates a terminal device.
The
pty
terminal
driver is a character driver typically used for remote logins; it is described
in
Section 5.6.
(For detailed information on device drivers
refer to the device driver documentation.)
Normally, device
special file management is performed automatically by the system.
For example,
when you install a new version of the UNIX operating system, there is a point
at which the system probes all buses and controllers and all the system devices
are found.
The system then builds databases that describe the devices and
creates device special files which make them available to users.
The most
common way that you use a device special file is to specify it as the location
of a UFS file system in the system
/etc/fstab
file, which
is documented in
Chapter 6.
You only need to perform manual operations on device special files when there are problems with the system or when you need to support a device that cannot be handled automatically. The following sections describe the way that devices and device special files are named and organized in Version 5.0 or higher. See Appendix B for information on other supported device mnemonics for legacy devices and their associated device names.
Note the following:
A current device special file for a SCSI device has the format
/dev/disk/dsk13a
for SCSI disks and
/dev/ntape/tape0_d0
for SCSI tapes.
A SCSI device special file in the format
/dev/rz10b
is a legacy device special file.
The following sections
differentiate between
current
and
legacy
device special files.
You may also see these referred to as old
(legacy) and new (current) device names in some scripts and utilities.
First-time users of the operating system need not be concerned with legacy device
special file names except where there is a need to use third-party drivers
and devices that do not support the current naming model.
(The structure of
a device special file will be described in detail later in this section.)
There is currently one device special file naming model for SCSI disk and tape devices and a different model for all other devices. The naming system for SCSI disk and tape devices will be extended to the other devices in future releases. This ensures that there is continued support for legacy devices and device names on a nonclustered system. Applications and utilities will support all device names or will display an error message informing you of which device name format is supported.
Legacy device names and device special files will be maintained
for some time and their retirement schedule will be announced in a future
release.
5.5.1 Related Documentation and Utilities
The following documents contain information about device names:
Books:
Chapter 6 contains information about context dependent symbolic links (CDSLs). Some directories that contain device special files are CDSLs; you should be familiar with this concept before you read this section.
Reference pages and utilities:
The
dsfmgr
(8)
reference page describes the utility used to
manage device special files.
The
MAKEDEV
(8)
reference page describes the utility
used to manage legacy device special files, if you need to create
rz*
format device special files.
The
disklabel
(8)
reference page describes the utility used to maintain disk pack labels.
The
diskconfig
(8)
reference page describes how to invoke the Disk Configuration interface, a
graphical disk management tool that provides additional features over
disklabel
in that you can use it to partition disks and create file
systems on the disks in a single operation.
You can also launch the Disk Configuration
interface from the CDE Application Manager - System_Admin folder.
The Disk
icon is located in the Configuration folder.
Online help describes how to
use this interface.
5.5.2 Device Special File Directories
You should be familiar with the file system hierarchy described in Chapter 6, in particular the implementation of Context Dependent Symbolic Links (CDSLs). CDSLs enable some devices to be available cluster-wide, when a system is part of a cluster.
For device special files, a
/devices
directory exists
under
/
(root).
This directory contains subdirectories
that each contain device special files for a class of devices.
A class of
device corresponds to related types of devices, such as disks or nonrewind
tapes.
For example, the directory
/dev/disk
contains files
for all supported disks, and
/dev/ntape
contains device
special files for nonrewind tape devices.
Currently, only the subdirectories
for certain classes have been created.
The available classes are defined in
Appendix B.
Note that in all operations you will need to specify
paths using the
/dev
directory and not the
/devices
directory.
From the
/dev
directory, there are symbolic links
to corresponding subdirectories to the
/devices
directory.
For example:
lrwxrwxrwx 1 root system 25 Nov 11 13:02 ntape ->
../../../../devices/ntape
lrwxrwxrwx 1 root system 25 Nov 11 13:02 rdisk ->
../../../../devices/rdisk
lrwxrwxrwx 1 root system 24 Nov 11 13:02 tape ->
../../../../devices/tape
This structure enables certain devices to be host-specific when the
system is a member of a cluster.
It enables other devices to be shared between
all members of a cluster.
In addition, new classes of devices can be added
by device driver developers and component vendors.
5.5.2.1 Legacy Device Special File Names
According to legacy
device naming conventions, all device special files are stored in the
/dev
directory.
The device special file names indicate the device
type, its physical location, and other device attributes.
Examples of
disk and tape device special file names that use the legacy
conventions are
/dev/rz14f
for a SCSI disk and
/dev/rmt0a
for a SCSI tape.
The name contains the following information:
/path/ prefix {root_name} {unit_number} {suffix}
/dev/          rmt         0             a
/dev/   r      rz          4             c
/dev/   n      rmt         12            h
This information is interpreted as follows:
The
path
is the directory for device special
files.
All device special files are placed in the
/dev
directory.
The prefix differentiates one set of device special files for the same physical device from another set, as follows:
r
- Indicates a character (raw) disk
device.
Device special files for block devices have no prefix.
n
- Indicates a no rewind on close
tape device.
Device special files for rewind on close tape devices have no
prefix.
The
root_name
is the two or three-character
driver name, such as
rz
for SCSI disk devices, or
rmt
for tape devices.
The unit_number is the unit number of the device, as follows:
For SCSI disks, the unit number is calculated with the formula:
unit = (bus * 8) + target
For HSZ40 and HSZ10 disk devices, a letter can precede
the unit number to indicate the LUN, where
a
is LUN 0,
b
is LUN 1, and so on.
You do not need to include the letter
a
for LUN 0; it is the default.
For tapes, the unit number is a sequential number from 0 to 7.
The suffix differentiates multiple device special files for the same physical device, as follows:
Disks use the letters
a
through
h
to indicate partitions.
In all, 16 files are created for each
disk device: 8 for character device partitions
a
through
h
, and 8 for block device partitions
a
through
h
.
Tapes
use suffixes to indicate tape densities.
Up to 8 files are created for each
tape device: two for each density, using the suffixes defined in
Table 5-1.
Table 5-1: Tape Device Suffix for Legacy Device Special Files
Suffix | Description |
a |
QIC-24 density for SCSI QIC devices. |
l |
The lowest density supported by the device, or QIC-120 density for SCSI QIC devices. |
m |
Medium density when a drive is triple density, or QIC-150 density for SCSI QIC devices. |
h |
The highest density supported by the device, or QIC-320 density for SCSI QIC devices. |
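The legacy unit-number calculation above can be illustrated with shell arithmetic. The helper name `legacy_disk_name` is hypothetical, not a system command; it simply applies the unit = (bus * 8) + target formula to build a legacy disk name.

```shell
# Build a legacy SCSI disk device name from bus, target, and partition,
# using the formula above: unit = (bus * 8) + target.
legacy_disk_name() {
    bus=$1; target=$2; partition=$3
    echo "rz$(( bus * 8 + target ))$partition"
}
legacy_disk_name 1 6 f    # disk at bus 1, target 6, partition f -> rz14f
```

This reproduces the `/dev/rz14f` example given earlier in this section: bus 1, target 6 yields unit 14.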
Legacy device naming conventions are supported so that scripts will
continue to work as expected.
However, features available with the current
device naming convention may not work with the legacy naming convention.
When
Version 5.0 or higher is installed, none of the legacy device special files
(such as
rz13d
) will be created during the installation.
If you determine that legacy device special file naming is required, you
will need to create the legacy device names using the appropriate commands
described in
dsfmgr
(8).
Note that some devices will not support legacy
device special files.
5.5.2.2 Current Device Special File Names
Current device special files use
abstract device names and convey no information about the device architecture
or logical path to the device.
The new device naming convention consists of
a descriptive name for the device and an instance number.
These two elements
form the basename of the device as shown in
Table 5-2.
Table 5-2: Sample Current Device Special File Names
Location in
/dev |
Device Name | Instance | Basename |
/disk |
dsk |
0 | dsk0 |
/rdisk |
dsk |
0 | dsk0 |
/disk |
cdrom |
1 | cdrom1 |
/tape |
tape |
0 | tape0 |
A combination of the device name with a system-assigned instance
number creates a basename such as
dsk0
.
The current device special files are named according to the basename of the devices, and include a suffix that conveys more information about the device being addressed. This suffix will differ depending on the type of device, as follows:
Disks - These device file names consist of the basename
and a suffix from
a
through
z
.
For
example,
dsk0a
.
Disks use a through h to identify partitions.
By default, CD-ROM and floppy disk devices use only the letters
a
and
c
.
For example,
cdrom1c
and
floppy0a
.
The same device names exist in the class directory
/dev/rdisk
for raw devices.
Tapes -
These device file names consist of the basename and a suffix made up of the characters
_d
followed by an integer.
For example,
tape0_d0
.
This suffix determines the density of the tape device, according to the entry
for the device in the
/etc/ddr.dbase
file.
For example:
Device | Density |
tape0 |
Default density |
tape0c |
Default density with compression |
tape0_d0 |
Density associated with entry 0 in
/etc/ddr.dbase |
tape0_d1 |
Density associated with entry 1 in
/etc/ddr.dbase |
Note that with the new device special file naming, there is a direct mapping from the legacy tape device name suffix to the current name suffix as follows:
Legacy Device Name Suffix | Current Suffix |
l (low) | _d0 |
m (medium) | _d2 |
h (high) | _d1 |
a (alternate) | _d3 |
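The suffix mapping above can be expressed as a small helper. This is only an illustration of the table; the helper name is hypothetical, and the actual density each entry selects depends on the device's entry in /etc/ddr.dbase.

```shell
# Map a legacy tape density suffix to the equivalent current suffix,
# per the legacy-to-current mapping table above.
tape_suffix() {
    case $1 in
        l) echo _d0 ;;   # low density
        m) echo _d2 ;;   # medium density
        h) echo _d1 ;;   # high density
        a) echo _d3 ;;   # alternate density
        *) echo "unknown suffix: $1" >&2; return 1 ;;
    esac
}
tape_suffix h    # -> _d1
```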
There are two sets of device names for tapes, both of which conform to the
current naming convention: the
/dev/tape
directory for
rewind devices, and the
/dev/ntape
directory for no-rewind devices.
To determine which device special file to use, you can look in the
/etc/ddr.dbase
file.
5.5.2.3 Converting Device Special File Names
If you have shell scripts that use commands which act on device special files, you should note that any command or utility supplied with the operating system operates on current and legacy file names in one of the following ways:
The program will accept both forms of device name.
Only the current device names will be supported by the program. This means that if you use legacy device names, you will not be able to use these utilities.
Only the old device names will be supported. This means that if you use current device names, you will not be able to use these utilities.
Note however that no device can use both forms of device names simultaneously. You should test any shell scripts, and if necessary refer to the individual reference pages or online help for a utility.
If you want to update scripts, translating legacy names to the equivalent
current name is a simple process.
Table 5-3
shows some examples
of legacy device names and corresponding current device names.
Note that
there is no relationship between the instance numbers.
A device that was associated
with device special file
/dev/rz10b
may be associated
with
/dev/disk/dsk2b
under the current system.
Using these names as examples, you should be able to translate most
device names that appear in your scripts.
You can also use the utility
dsfmgr
(8)
to convert device names.
Table 5-3: Sample Device Name Translations
Legacy Device Special File Name | New Device Special File Name |
/dev/rmt0a |
/dev/tape/tape0 |
/dev/rmt1h |
/dev/tape/tape1_d1 |
/dev/nrmt0a |
/dev/ntape/tape0_d0 |
/dev/nrmt3m |
/dev/ntape/tape3_d2 |
/dev/rz0a |
/dev/disk/dsk0a |
/dev/rz10g |
/dev/disk/dsk10g |
/dev/rrz0a |
/dev/rdisk/dsk0a |
/dev/rrz10b |
/dev/rdisk/dsk10b |
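A rough translation helper based on Table 5-3, restricted to disk names (tape names also need the density-suffix mapping shown earlier). As noted above, there is no guaranteed relationship between legacy and current instance numbers, so this sketch only illustrates the shape of the names; dsfmgr(8) is the supported way to convert them.

```shell
# Translate a legacy disk device special file name to the corresponding
# current-style name, carrying the instance number over unchanged
# (which a real system may not do). The rrz case must be tested before
# the rz case, since /dev/rrz* also matches the /dev/rz* pattern.
translate_legacy() {
    case $1 in
        /dev/rrz*)  echo "$1" | sed 's|^/dev/rrz|/dev/rdisk/dsk|' ;;
        /dev/rz*)   echo "$1" | sed 's|^/dev/rz|/dev/disk/dsk|' ;;
        *)          echo "unrecognized: $1" >&2; return 1 ;;
    esac
}
translate_legacy /dev/rz10g    # -> /dev/disk/dsk10g
translate_legacy /dev/rrz0a    # -> /dev/rdisk/dsk0a
```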
5.5.3 Managing Device Special Files
In most cases, the management of device special files is undertaken
by the system itself.
During the initial full installation of the operating
system, the device special files are created for every SCSI disk and SCSI
tape device found on the system.
If the system was updated from a previous
version using the update installation procedure, both the current device special
files and the legacy device files will exist.
However, if you subsequently add new SCSI devices,
dsfmgr
creates only the current device special files by default.
When the system is rebooted,
dsfmgr
is called automatically during the boot sequence to create the new device
special files for the device.
The system also automatically creates the device
special files that it requires for pseudodevices such as
ptys
(pseudoterminals).
When you add a SCSI disk or tape device to the system, the new device
will be located automatically, added to the hardware management databases,
and its device special files will be created.
On the first reboot after installation
of the new device,
dsfmgr
is called automatically during
the boot sequence to create the new device special files for that device.
However, under certain circumstances, you may need to perform manual
administration of device special files, such as creating legacy device special
files or verifying the device databases.
The utility named
dsfmgr
enables you to manage device special files.
Some devices or some
system configuration changes may require the manual creation of a device special
file.
To support applications that work only with legacy device names, you may need to manually create the legacy device special files, either for every existing device or only for recently added devices. Note, however, that some recent devices using features such as Fibre Channel support only the current device special file naming convention.
The following sections describe some typical uses of
dsfmgr
.
Refer to the
dsfmgr
(8)
reference page for detailed information
on the command syntax.
The system script file
/sbin/dn_setup
,
which runs at boot time to create device special files, provides an example
of a script that uses
dsfmgr
command options.
5.5.3.1 Using dn_setup to Perform Generic Operations
The
/sbin/dn_setup
script runs automatically at system startup to create device special file names.
Normally, you do not need to use
dn_setup
options; however, they are useful if you need to troubleshoot device name problems or restore a damaged device special file directory or database files.
See also
Section 5.5.3.3.
If you frequently change your system configuration or install different versions of the operating system you may see device-related error messages at the system console during system start up. These messages might indicate that the system was unable to assign device special file names. This problem can occur when the saved configuration does not map to the current configuration. Adding or removing devices between installations can also cause the problem.
The command syntax is as follows:
/sbin/dn_setup [-sanity_check] [-boot] [-default] [-clean] [-default_config] [-init]
The
dn_setup
script has the following functions. Generally, only the
-sanity_check
option is useful to administrators. The remaining options should be used under the guidance of technical support for debugging and problem solving:
-sanity_check - Verifies the consistency and currency of the device special files and the directory hierarchy. The message
Passed
is displayed if the check is successful.
-boot - Runs at boot time to create all the default device special databases, files, and directories.
-default - Creates only the required device special directories.
-clean - Deletes everything in the device special directory tree and re-creates the entire tree (including device special files).
-default_config - Creates only the class and category databases.
-init - Removes all the default device special databases, files, and directories and re-creates everything.
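As a sketch, the sanity check can be wrapped in a script that reports a simple status. This assumes the Tru64 /sbin/dn_setup utility, which prints Passed on success; on any other result the script falls back to a hint rather than attempting repairs.

```shell
# Sketch only: report the result of dn_setup -sanity_check.
# Assumes the Tru64 /sbin/dn_setup utility; "Passed" marks success.
if /sbin/dn_setup -sanity_check 2>/dev/null | grep -q 'Passed'; then
    status="OK"
else
    status="check failed (try dsfmgr -v)"
fi
echo "device special files: $status"
```

On a system where the check fails, follow up with dsfmgr -v as described in Section 5.5.3.3 rather than the destructive dn_setup options.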
5.5.3.2 Displaying Device Classes and Categories
Any individual
type of device on the system is identified in the Category to Class-Directory,
Prefix Database file,
/etc/dccd.dat
.
You can display information in these databases using
dsfmgr
.
This information enables you to find out what devices are on a system and obtain device identification attributes that can be used with other
dsfmgr
command options.
For example, a class of devices shares related physical characteristics, such as being disk devices.
Each class of devices has its own directory in
/dev
such as
/dev/ntape
for nonrewind tape devices.
Device classes are stored in the Device Class Directory Default Database file,
/etc/dcdd.dat
.
To view the entries in these databases, you use the following command:
#
/sbin/dsfmgr -s
dsfmgr: show all datum for system at /

Device Class Directory Default Database:
 # scope mode  name
-- ----- ----  -----------
 1   l   0755  .
 2   c   0755  disk
 3   c   0755  rdisk
 4   c   0755  tape
 5   c   0755  ntape
 6   l   0755  none

Category to Class-Directory, Prefix Database:
 # category       sub_category   type       directory iw t mode prefix
-- -------------- -------------- ---------- --------- -- - ---- --------
 1 disk           cdrom          block      disk       1 b 0600 cdrom
 2 disk           cdrom          char       rdisk      1 c 0600 cdrom
 3 disk           floppy         block      disk       1 b 0600 floppy
 4 disk           floppy         char       rdisk      1 c 0600 floppy
 5 disk           floppy_fdi     block      disk       1 b 0666 floppy
 6 disk           floppy_fdi     char       rdisk      1 c 0666 floppy
 7 disk           generic        block      disk       1 b 0600 dsk
 8 disk           generic        char       rdisk      1 c 0600 dsk
 9 parallel_port  printer        *          .          1 c 0666 lp
10 pseudo         kevm           *          .          0 c 0600 kevm
11 tape           *              norewind   ntape      1 c 0666 tape
12 tape           *              rewind     tape       1 c 0666 tape
13 terminal       hardwired      *          .          2 c 0666 tty
14 *              *              *          none       1 c 0000 unknown

Device Directory Tree:
12800   2 drwxr-xr-x  6 root system 2048 May 23 09:38 /dev/.
  166   1 drwxr-xr-x  2 root system  512 Apr 25 15:58 /dev/disk
 6624   1 drwxr-xr-x  2 root system  512 Apr 25 11:37 /dev/rdisk
  180   1 drw-r--r--  2 root system  512 Apr 25 11:39 /dev/tape
 6637   1 drw-r--r--  2 root system  512 Apr 25 11:39 /dev/ntape
  181   1 drwxr-xr-x  2 root system  512 May  8 16:48 /dev/none

Dev Nodes:
13100   0 crw-------  1 root system 79,  0 May  8 16:47 /dev/kevm
13101   0 crw-------  1 root system 79,  2 May  8 16:47 /dev/kevm.pterm
13102   0 crw-r--r--  1 root system 35,  0 May  8 16:47 /dev/tty00
13103   0 crw-r--r--  1 root system 35,  1 May  8 16:47 /dev/tty01
13104   0 crw-r--r--  1 root system 34,  0 May  8 16:47 /dev/lp0
  169   0 brw-------  1 root system 19, 17 May  8 16:47 /dev/disk/dsk0a
 6627   0 crw-------  1 root system 19, 18 May  8 16:47 /dev/rdisk/dsk0a
  170   0 brw-------  1 root system 19, 19 May  8 16:47 /dev/disk/dsk0b
 6628   0 crw-------  1 root system 19, 20 May  8 16:47 /dev/rdisk/dsk0b
  171   0 brw-------  1 root system 19, 21 May  8 16:47 /dev/disk/dsk0c
.
.
.
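The per-class directories can be pulled out of a saved copy of this listing with a short awk filter. The sample lines below are copied from the Device Class Directory Default Database output above; the c scope in the second field marks class directories.

```shell
# List the /dev class directories from saved "dsfmgr -s" output.
# Input lines here are copied from the sample listing above.
awk '$2 == "c" { print "/dev/" $4 }' <<'EOF'
 1 l 0755 .
 2 c 0755 disk
 3 c 0755 rdisk
 4 c 0755 tape
 5 c 0755 ntape
 6 l 0755 none
EOF
```

This prints /dev/disk, /dev/rdisk, /dev/tape, and /dev/ntape, one per line.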
This display provides you with information that can be used with other
dsfmgr
commands.
(Refer to the
dsfmgr
(8)
reference page for
a complete description of the fields in the databases).
For example:
class
- The device class such as
disk
(a block device),
rdisk
(a character device),
or
tape
(a rewind device).
This information can be used
with the
dsfmgr -a
(add) or
dsfmgr -r
(remove) command options to add or remove classes.
category
- The primary description
of a device.
For example, SCSI disks, CD-ROM readers and floppy disk readers
are all in the
disk
category.
This information can be used
with the
dsfmgr -a
(add) or
dsfmgr -r
(remove) command options to add or remove categories.
5.5.3.3 Verifying and Fixing the Databases
Under unusual circumstances, the device databases
may be corrupted or device special files may accidentally be removed from
the system.
You may see errors indicating that a device is no longer available,
but the device itself does not appear to be faulty.
If you suspect that there
may be a problem with the device special files, you can check the databases
using the
dsfmgr -v
(verify) command option.
Caution
If you see error messages at system start up that indicate a device naming problem, you should use the verify command only to enable you to proceed with the boot. Check your system configuration before and after verifying the databases. The verification procedure will fix most errors and enable you to proceed, however it will not cure any underlying device or configuration problems.
Such problems are rare and usually arise when performing unusual operations, such as switching between boot disks. Errors generally mean that the system was unable to recover and use a good copy of the previous configuration, usually because the current system configuration no longer matches the database.
As for all potentially destructive system operations, you should always be able to restore the system to its identical previous configuration, and to restore the previous version of the operating system from your backup.
For example, if you attempted to configure the floppy disk device to
use the
mtools
utilities, and you found that you could
not access the device, you would use the following command:
#
/sbin/dsfmgr -v
dsfmgr: verify all datum for system at /

Device Class Directory Default Database: OK.
Device Category to Class Directory Database: OK.
Dev directory structure: OK.
Dev Nodes:
ERROR: device node does not exist: /dev/disk/floppy0a
ERROR: device node does not exist: /dev/disk/floppy0c
   Errors: 2
Total errors: 2
This output shows that the device special files
for the floppy disk device are missing.
To correct this problem, use the same
command with the
-F
(fix) flag to correct the errors as
follows:
#
/sbin/dsfmgr -v -F
dsfmgr: verify all datum for system at /

Device Class Directory Default Database: OK.
Device Category to Class Directory Database: OK.
Dev directory structure: OK.
Dev Nodes:
WARNING: device node does not exist: /dev/disk/floppy0a
WARNING: device node does not exist: /dev/disk/floppy0c
   OK.
Total warnings: 2
Notice that the
ERROR
changes
to a
WARNING
, which indicates that the device special files
for the floppy disk were created automatically.
Repeating the
dsfmgr
-v
command will then show no errors.
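A quick way to decide whether a fix pass is warranted is to count the ERROR lines in saved verify output. The lines below are the sample dsfmgr -v output shown earlier, used here as canned input.

```shell
# Count ERROR lines in saved "dsfmgr -v" output (sample from above);
# a nonzero count suggests running "dsfmgr -v -F".
errors=$(grep -c '^ERROR' <<'EOF'
Device Class Directory Default Database: OK.
Device Category to Class Directory Database: OK.
Dev directory structure: OK.
ERROR: device node does not exist: /dev/disk/floppy0a
ERROR: device node does not exist: /dev/disk/floppy0c
EOF
)
echo "$errors error(s) found"
```

In practice you would capture the live output with `/sbin/dsfmgr -v > /tmp/verify.out 2>&1` and run the count against that file.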
5.5.3.4 Deleting Device Special Files
If a device is permanently removed from the system,
you may want to remove its device special file so that it can be reassigned
to another type of device.
Use the
dsfmgr -D
command option
to remove device special files as shown in the following example:
#
cd /dev/disk
#
ls
cdrom0a  dsk0a  dsk0c  dsk0e  dsk0g  floppy0a
cdrom0c  dsk0b  dsk0d  dsk0f  dsk0h  floppy0c
#
/sbin/dsfmgr -D cdrom0*
-cdrom0a -cdrom0a -cdrom0c -cdrom0c
#
ls
dsk0a  dsk0c  dsk0e  dsk0g  floppy0a
dsk0b  dsk0d  dsk0f  dsk0h  floppy0c
Notice that the first
ls
output shows device special files for
cdrom0
.
Running
dsfmgr -D
on all
cdrom
devices, as shown by the wildcard symbol (*), causes all device
special files for that sub_category to be permanently deleted.
The message that follows the command repeats each basename (such as cdrom0a) twice, because it also deletes the device special files from the
/dev/rdisk
directory, where the raw (character) device special files are located.
Note that if device special files are deleted in error, and no hardware changes are made, they can be re-created as follows:
#
/sbin/dsfmgr -n cdrom0a
+cdrom0a +cdrom0a
#
/sbin/dsfmgr -n cdrom0c
+cdrom0c +cdrom0c
5.5.3.5 Moving and Exchanging Device Special File Names
You may want to reassign the device special
files between devices using the
dsfmgr -m
(move) command
option.
It is also possible to exchange the device special files of one device for those of another device using the
-e
option.
The syntax
for this command option is as follows:
/sbin/dsfmgr [-e | -m] basename_1 [basename_2 | instance]
Where:
basename_1
is the prefix and instance
number of the source device such as
dsk0
or
tape7
basename_2
is the prefix and instance
number of the target device, such as
dsk0
or
tape7
instance is the instance number of the target device, such as 15.
For example:
#
/sbin/dsfmgr -m dsk0 dsk10
#
/sbin/dsfmgr -e dsk1 15
5.6 Manually Configuring Devices Using ddr_config
Most device management is automatic.
A device added
to a system will be recognized, mapped, and added to the device databases
as described in
Section 5.4.
However, you may sometimes
need to add devices that cannot be detected and added to the system automatically.
These devices may be old, or new prototypes, or they may not adhere closely
to supported standards such as SCSI.
In these cases, you must manually configure
the device and its drivers in the kernel, using the
ddr_config
utility described in this section.
The following sections describe how to
create pseudoterminals (ptys
), a terminal pseudodevice
that enables remote logins.
There are two methods you can use to reconfigure and rebuild a kernel: a static method and a dynamic method.
The dynamic method uses the
ddr_config
utility to reconfigure the kernel and effect the device configuration changes without shutting down the operating system.
The static method uses the
MAKEDEV
and
config
utilities and requires that you shut down the system and restart it in order to rebuild the kernel and effect the changes.
The
MAKEDEV
command
or the
mknod
command is used to create device special files
instead of the
dsfmgr
utility.
The
kmknod
command creates device special files for third-party kernel layered products.
Refer to
MAKEDEV
(8),
mknod
(8), and
kmknod
(8)
for more information.
For loadable
drivers, the
sysconfig
command creates the device special
files by using the information specified in the driver's stanza entry in the
/etc/sysconfigtab
database file.
5.6.1 Dynamic Method to Reconfigure the Kernel
The following sections explain how
to use the
ddr_config
utility to manage the DDR database
for your system.
These
sections introduce DDR, then describe how you use the
ddr_config
utility to:
5.6.1.1 Understanding Dynamic Device Recognition
Dynamic Device Recognition is a framework for describing the operating parameters and characteristics of SCSI devices to the SCSI CAM I/O subsystem. You can use DDR to include new and changed SCSI devices into your environment without having to reboot the operating system. You do not disrupt user services and processes, as happens with static methods of device recognition.
DDR is preferred over the static method for recognizing SCSI devices.
The current, static method, as described in
Chapter 4,
is to edit the
/sys/data/cam_data.c
data file and include
custom SCSI device information, reconfigure the kernel, and shut down and
reboot the operating system.
Note
Support for the static method of recognizing SCSI devices will be retired in a future release.
Both methods can be employed on the same system, with the restriction that the devices described by each method are exclusive to that method (nothing is doubly-defined).
The information DDR provides about SCSI devices is needed by SCSI drivers.
You can supply this information using DDR when you add new SCSI devices to
the system, or you can use the
/sys/data/cam_data.c
data
file and static configuration methods.
The information provided by DDR and
the
cam_data.c
file have the same objectives.
When compared
to the static method of providing SCSI device information, DDR minimizes the
amount of information that is supplied by the device driver or subsystem to
the operating system and maximizes the amount of information that is supplied
by the device itself or by defaults specified in the DDR databases.
5.6.1.1.1 Conforming to Standards
Devices you add to the system should minimally conform to the SCSI-2
standard, as specified in
SCSI-2, Small Computer System Interface-2 (X3.131-1994), or other variants of the standard documented in
the
Software Product Description.
If your devices do
not comply with the standard, or if they require exceptions from the standard,
you store information about these differences in the DDR database.
If the
devices comply with the standard, there is usually no need to modify the database.
Note however that such devices should be automatically recognized or configurable
using
hwmgr
.
5.6.1.1.2 Understanding DDR Messages
Following are the most common DDR message categories and the action, if any, that you should take.
Console messages are displayed during the boot sequence.
Frequently, these messages indicate that the kernel cannot read the DDR database. This error occurs when the system's firmware is not at the proper revision level. Upgrade to the correct revision level of the firmware.
Console messages warn about corrupted entries in the database. Recompile and regenerate the database.
Runtime messages generally indicate syntax errors that are
produced by the
ddr_config
compiler.
The compiler runs
when you use the
-c
option to the
ddr_config
utility and does not produce an output database until all syntax
errors have been corrected.
Use the
-h
option to the
ddr_config
command to display help on command options.
5.6.2 Changing the DDR Database
When
you make a change to the operating parameters or characteristics of a SCSI
device, you must describe the changes in the
/etc/ddr.dbase
file.
You must compile the changes by using the
ddr_config -c
command.
Two common reasons for changes are:
Your device deviates from the SCSI standard or reports something different from the SCSI standard
You want to optimize device defaults, most commonly the
TagQueueDepth
parameter, which specifies the maximum number of active
tagged requests the device supports
You use the
ddr_config
-c
command to compile the
/etc/ddr.dbase
file
and produce a binary database file,
/etc/ddr.db
.
When the kernel is notified that the file's state has changed, it loads the new
/etc/ddr.db
file.
In this way, the
SCSI CAM I/O subsystem is dynamically updated with the changes that you made
in the
/etc/ddr.dbase
file and the contents of the on-disk
database are synchronized with the contents of the in-memory database.
Use the following procedure to compile the
/etc/ddr.dbase
database:
Log in as root or become the superuser.
Enter the
ddr_config -c
command, for example:
#
/sbin/ddr_config -c
Note that there is no message confirming successful completion.
When the prompt is displayed, the compilation is complete.
If there are syntax
errors, they are displayed at standard output and no output file is compiled.
5.6.3 Converting Customized cam_data.c Information
You use the following procedure to
transfer customized information about your SCSI devices from the
/sys/data/cam_data.c
file to the
/etc/ddr.dbase
text database.
In this example,
MACHINE
is the
name of your machine's system configuration file.
Log on as root or become the superuser.
To produce a summary of the additions and modifications that
you should make to your
/etc/ddr.dbase
file, enter the
ddr_config -x
command.
For example:
#
/sbin/ddr_config -x MACHINE > output.file
This command uses as input the system configuration file that you used to build your running kernel. The procedure runs in multiuser mode and requires no input after it has been started. You should redirect output to a file in order to save the summary information. Compile errors are reported to standard error and the command terminates when the error is reported. Warnings are reported to standard error and do not terminate the command.
Edit the characteristics that are listed on the output file
into the
/etc/ddr.dbase
file, following the syntax requirements
of that file.
Instructions for editing the
/etc/ddr.dbase
database are found in
ddr.dbase
(4).
Enter the
ddr_config -c
command to compile
the changes.
See Section 5.6.2 for more information.
You can add pseudodevices, disks, and tapes statically, without using
DDR, by using the methods described in the following sections.
5.6.4 Adding Pseudoterminals and Devices Without Using DDR
System V Release 4 (SVR4) pseudoterminals (ptys) are implemented by default and are defined as follows:
/dev/pts/N
The variable N is a number from 0 to 9999.
This implementation allows for more scalability than the BSD ptys (tty[a-zA-Z][0-9a-zA-Z]).
The base system commands and utilities have been modified to support both
SVR4 and BSD ptys.
To revert to the original default behavior, create
the BSD ptys using
MAKEDEV
.
See also the
SYSV_PTY
(8),
pty
(7),
and
MAKEDEV
(8)
reference pages.
5.6.4.1 Adding Pseudoterminals
Pseudoterminals enable users to use the network to access a system.
A pseudoterminal is a pair of character devices that emulate a hardware terminal
connection to the system.
Instead of hardware, however, there is a master
device and a slave device.
Pseudoterminals, unlike terminals, have no corresponding
physical terminal port on the system.
Remote login sessions, window-based
software, and shells use pseudoterminals for access to a system.
By default,
SVR4 device special files such as
/dev/pts/
<n>
are created.
You must use
/dev/MAKEDEV
to create
BSD pseudoterminals such as
/dev/ttyp/
<n>.
Two implementations of pseudoterminals are offered: BSD STREAMS
and BSD
clist
.
For some installations, the default number of
pty
devices is adequate.
However, as your user community grows, and each user
wants to run multiple sessions of one or more timesharing machines in your
environment, the machines may run out of available
pty
lines.
The following command enables you to review the current value:
#
sysconfig -q pts
pts: nptys = 255
You can dynamically change the value
with the
sysconfig
command, although this change will not
be preserved across reboots:
#
sysconfig -r pts nptys=400
To modify the value and preserve it across reboots, use the following procedure:
Log in as root.
Add or edit the pseudodevice
entry in the system configuration file
/etc/sysconfigtab
.
By default, the kernel supports 255 pseudoterminals.
If you add more pseudoterminals
to your system, you must edit the system configuration file entry and increment
the number 255 by the number of pseudoterminals you want to add.
The following example shows the value increased to 400 pseudoterminals.
pts:
nptys=400
The pseudodevice entry for
clist-
based pseudoterminals
is as follows:
pseudo-device pty 655
For more information on the configuration file and its pseudodevice keywords, refer to Chapter 4.
For
clist-
based pseudoterminals, you also
need to rebuild and boot the new kernel.
Use the information on rebuilding
and booting the new kernel in
Chapter 4.
When the system is first installed, the configuration file contains a pseudodevice entry with the default number of 255 pseudoterminals. If for some reason the number is deleted and not replaced with another number, the system defaults to supporting the minimum value of 80 pseudoterminals. The maximum value is 131072.
If you want to create BSD terminals, use the
/dev/MAKEDEV
command as follows:
Log in as root and change to the
/dev
directory.
Create
the device special files by using the
MAKEDEV
command,
which has the following syntax:
./MAKEDEV pty#
The number sign ( #
) represents
the set of pseudoterminals (0 to 101) you want to create.
The first 51 sets
(0 to 50) create 16 pseudoterminals for each set.
The last 51 sets (51 to
101) create 46 pseudoterminals for each set.
You can use the following syntax
to create a large number of pseudoterminals:
./MAKEDEV PTY_#
The number sign ( #
) represents
the set of pseudoterminals (1 to 9) you want to create.
Each set creates 368
pseudoterminals, except the
PTY_3
and
PTY_9
sets, which create 356 and 230 pseudoterminals, respectively.
(Refer to the
Software Product Description (SPD) for the maximum number of supported pseudoterminals).
Note
By default, the installation software creates device special files for the first two sets of pseudoterminals,
pty0
and
pty1
. The
pty0
pseudoterminals have corresponding device special files named
/dev/ttyp0
through
/dev/ttypf
. The
pty1
pseudoterminals have corresponding device special files named
/dev/ttyq0
through
/dev/ttyqf
.
If you add pseudoterminals to your system, the
pty#
variable must be higher than
pty1
because the installation software sets
pty0
and
pty1
.
For example, to create device special files for a third set
of pseudoterminals, enter:
#
./MAKEDEV pty2
The
MAKEDEV
command
lists the device special files it has created.
For example:
MAKEDEV: special file(s) for pty2: ptyr0 ttyr0 ptyr1 ttyr1 ptyr2 ttyr2 ptyr3 ttyr3 ptyr4 ttyr4 ptyr5 ttyr5 ptyr6 ttyr6 ptyr7 ttyr7 ptyr8 ttyr8 ptyr9 ttyr9 ptyra ttyra ptyrb ttyrb ptyrc ttyrc ptyrd ttyrd ptyre ttyre ptyrf ttyrf
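The slave-side names in that listing follow a simple pattern: set pty2 uses the letter r with suffixes 0 through 9 and a through f. The loop below merely reproduces those names for illustration; it does not create device nodes, which is MAKEDEV's job.

```shell
# Generate the 16 slave tty names for BSD pty set 2 (letter "r").
# Illustration only; MAKEDEV, not this loop, creates the nodes.
names=""
for c in 0 1 2 3 4 5 6 7 8 9 a b c d e f; do
    names="${names:+$names }ttyr$c"
done
echo "$names"
```

The master-side names follow the same pattern with a pty prefix (ptyr0 through ptyrf).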
If
you want to allow root logins on all pseudoterminals, make sure an entry for
ptys
is present in the
/etc/securettys
file.
If you do not want to allow root logins on pseudoterminals, delete the entry
for
ptys
from the
/etc/securettys
file.
For example, to add the entries for the new
tty
lines and
to allow root login on all pseudoterminals, enter the following lines in the
/etc/securettys
file:
/dev/tty08 # direct tty
/dev/tty09 # direct tty
/dev/tty10 # direct tty
/dev/tty11 # direct tty
ptys
Refer to the
securettys
(4)
reference
page for more information.
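Adding the ptys entry can be made idempotent with a guard like the following. It works on a scratch copy so the real /etc/securettys is untouched; the scratch filename is illustrative.

```shell
# Append a "ptys" entry only if one is not already present.
# Works on a scratch copy; point f at /etc/securettys for real use.
f=/tmp/securettys.example
printf '/dev/tty08 # direct tty\n/dev/tty09 # direct tty\n' > "$f"
grep -qx 'ptys' "$f" || echo 'ptys' >> "$f"
tail -1 "$f"
```

Because of the grep guard, running the snippet repeatedly never adds a second ptys line.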
When
you add new SCSI devices to your system, they are automatically detected and
configured by the Hardware Manager
hwmgr
and the Device
Special File Manager
dsfmgr
.
However, you may want to manually
create device names for other devices using
/dev/MAKEDEV
.
For example, you may also need to recreate device special files that were
incorrectly deleted from the system.
For new devices, you must physically connect the devices and then make the devices known to the system. There are two methods, one for static drivers and another for loadable drivers. You will need the documentation that came with your system processor and any documentation that came with the device itself. You may also require a disk containing the driver software.
Appendix D provides an outline example of adding a PCMCIA modem to a system, and shows you how to create the device special files.
Note that it is not necessary to use
/dev/MAKEDEV
if you simply want to create legacy
rz
or
tz
device special files in
/dev
such as
/dev/rz5
.
The
dsfmgr
utility provides a method of creating
these device names.
To add a device for a loadable
driver, see the device driver documentation.
To add a device for a static driver, see Section 5.6.4.1.
Next, make the device special files for the device, by following these steps:
Change to the
/dev
directory.
Create the device special files by using the
MAKEDEV
command.
Use the following syntax to invoke the
MAKEDEV
command:
./MAKEDEV
device#
The
device
variable is the device
mnemonic for the drive you are adding.
Appendix B
lists
the device mnemonics for all supported disk and tape drives.
The number sign
( #
) is the number of the device.
For example,
to create the device special files for two PCMCIA modem cards, use the following
command:
#
./MAKEDEV ace2 ace3
MAKEDEV: special file(s) for ace2: tty02 MAKEDEV: special file(s) for ace3: tty03
The generated special files should look like this:
crw-rw-rw- 1 root system 35, 2 Oct 27 14:02 tty02 crw-rw-rw- 1 root system 35, 3 Oct 27 14:02 tty03
Stop system activity by using the
shutdown
command and then turn off the processor.
Refer to
Chapter 2
for more information.
Power up the machine. To ensure that all the devices are seen by the system, power up the peripherals before powering up the system box.
Boot the system with the new kernel. Refer to Chapter 2 for information on booting your processor.
The preceding sections described generic hardware management tools that
are used to manage many aspects of all devices, such as the
hwmgr
utility described in
Section 5.4.
The following
sections describe hardware management tools that are targeted at a particular
kind of device and perform specific tasks.
The topics covered in these sections
are:
Finding device utilities
Using SCSI utilities
Disk partitioning
Copying and Cloning disks
Monitoring disks
5.7.1 Finding Device Utilities
Many of the device utilities are documented elsewhere
in this guide or in other volumes of the documentation set.
For example, utilities
that enable you to configure network devices are documented in detail in the
Network Administration
guide.
Table 5-4
provides references to utilities documented
in the guides, including those listed in this chapter.
Other utilities are
documented only in reference pages.
Table 5-5
provides
references to utilities documented in the reference pages and also provides
pointers to reference data such as the Section 7 interface reference pages.
Table 5-4: Device Utilities Documented in the Guides
Device | Task | Location |
Processor | Starting or stopping | Chapter 2 |
Sharing resources | Chapter 3, Class Scheduler. | |
Monitoring | Chapter 3 and Chapter 12 (Environmental) | |
Power Management | Chapter 3,
dxpower . |
|
Testing memory | Chapter 12 | |
Error and Event handling | Chapter 12 and Chapter 13 | |
SCSI buses | Managing | Section 5.7.2.1,
scsimgr .
(Note that
hwmgr
supersedes this utility) |
Configuring | Section 5.7.2.2,
scu . |
|
Disks | Partitioning and Cloning | Section 5.7.3,
diskconfig |
Copying | Section 5.7.5,
dd |
|
Monitoring usage | Section 5.7.7,
df
and
du |
|
Power Management | Chapter 3 | |
File systems status | Chapter 6 | |
Testing and exercising | Chapter 12 | |
Tapes (and Disks) | Archiving | Chapter 9 |
Testing and exercising | Chapter 12 | |
Clock | Setting | Chapter 2 |
Modem | Configuring | Chapter 1 |
Table 5-5: Device Utilities Documented in the Reference Pages
Device | Task | Location |
Devices (General) | Configuring |
hwmgr (8),
devswmgr (8),
dsfmgr (8). |
Device Special Files |
kmknod (8)
,
mknod (8),
MAKEDEV (8),
dsfmgr (8). |
|
Interfaces |
atapi_ide (7),
devio (7),
emx (7). |
|
Processor | Starting/Stopping |
halt (8),
psradm (8),
reboot (2). |
Allocating CPU resources |
class_scheduling (4),
processor_sets (4),
runon (1). |
|
Monitoring |
dxsysinfo (8),
psrinfo (1). |
|
SCSI buses | Managing |
sys_attrs_cam (5),
ddr.dbase (4)
ddr_config (8). |
Disks | Partitioning |
diskconfig (8),
disklabel (4),
disklabel (8),
disktab (4). |
Monitoring |
dxsysinfo (8)
,
diskusg (8),
acctdisk (8),
df (1),
du (1),
quota (1). |
|
Testing and Maintenance |
diskx (8),
zeero (8). |
|
Interfaces |
ra (7),
radisk (8),
ri (7),
rz (7). |
|
Swap Space |
swapon (8). |
|
Tapes (and Disks) | Archiving |
bttape (8),
dxarchiver (8),
rmt (8). |
Testing and Maintenance |
tapex (8). |
|
Interfaces |
tz (7),
mtio (7),
tms (7). |
|
Floppy | Tools |
dxmtools (1),
mtools (1). |
Testing and Maintenance |
fddisk (8). |
|
Interfaces |
fd (7).
|
|
Terminals, Ports | Interfaces |
ports (7). |
Modem | Configuring |
chat (8).
|
Interfaces |
modem (7). |
|
Keyboard, Mouse | Interfaces |
dc (7),
scc (7). |
See
Appendix A
for a list of the utilities provided
by SysMan.
5.7.2 SCSI and Device Driver Utilities
The following sections describe utilities that you use to manage SCSI
devices and device drivers.
5.7.2.1 Using the SCSI Device Database Manager, scsimgr
The
scsimgr
utility is used to manage entries for SCSI devices in the
/etc/dec_scsi_db
database.
This is a binary database that stores
the logical identification assignments for SCSI devices, and preserves these
identifications across system reboots.
Most of the business of managing
SCSI devices is managed automatically by the system.
For example, you can
add a new SCSI device (such as a disk) to a system and the system will detect
the device on reboot, create database entries and create the device special
files in
/dev
.
Entries in the
/etc/dec_scsi_db
database are used to translate from a logical identifier (ID) of
a device to a physical address.
This information ensures that once a device
is associated with a device identifier, it retains that identifier on the
next reboot.
Note
You can now use
hwmgr
to perform all
scsimgr
operations. The
scsimgr
utility will be retired in a future release of the operating system.
5.7.2.2 Using the SCSI Configuration Utility, scu
The SCSI/CAM Utility Program,
scu
, provides commands
necessary for normal maintenance and diagnostics of SCSI peripheral devices
and the CAM I/O subsystem.
The
scu
program has an extensive
help feature that describes the utility's commands and conventions.
Refer also
to the
scu
(8)
reference page for detailed information on using this command.
You can use
scu
to:
Format disks
Reassign a defective disk block
Reserve and release a device
Display and set device and program parameters
Enable and disable a device
DSA Disks
For Digital Storage Architecture (DSA) disks, use the
radisk
program. See the
radisk
(8) reference page for information.
Examples of
scu
usage are:
#
scu
scu>
set nexus bus 0 target 0 lun 0
Device: RZ1CB-CA, Bus: 0, Target: 0, Lun: 0, Type: Direct Access
scu>
show capacity
Disk Capacity Information:
        Maximum Capacity: 8380080 (4091.836 megabytes)
        Block Length: 512
scu>
show scsi status 0
SCSI Status = 0 = SCSI_STAT_GOOD = Command successfully completed
5.7.2.3 Using the Device Switch Manager, devswmgr
The
devswmgr
command enables you to manage the device switch table
by displaying information about the device drivers in the table.
You can
also use the command to release device switch table entries.
Typically, you
release the entries for a driver after you have unloaded the driver and do
not plan to reload it later.
Releasing the entries frees them for use by
other device drivers.
Examples of
devswmgr
usage for device data are:
#
devswmgr -display
device switch database read from primary file
device switch table has 200 entries
#
devswmgr -getnum
Device switch reservation list
                        (*=entry in use)
 driver name             instance  major
-----------------------  --------  -----
 pfm                            1     71*
 fdi                            2     58*
 xcr                            2     57
 kevm                           1     56*
 cam_disk                       2     55*
 emx                            1     54
 TMSCP                          2     53
 MSCP                           2     52
 xcr                            1     44
 LSM                            4     43
 LSM                            3     42
 LSM                            2     41*
 LSM                            1     40*
 ace                            1     35*
 parallel_port                  1     34*
 cam_uagt                       1     30
 MSCP                           1     28
 TMSCP                          1     27
 scc                            1     24
 presto                         1     22
 cluster                        2     21*
 cluster                        1     19*
 fdi                            1     14*
 cam_tape                       1      9
 cam_disk                       1      8*
 pty                            2      7
 pty                            1      6
 tty                            1      1
 console                        1      0
5.7.3 Partitioning Disks Using diskconfig
The Disk Configuration graphical user interface (diskconfig
) enables you to perform the following tasks:
Display attribute information for existing disks
Modify disk configuration attributes
Administer disk partitions
See the
diskconfig
(8)
reference page for information on
invoking the Disk Configuration utility (diskconfig
).
An
online help volume describes how you use the graphical interface.
See the
disklabel
(8)
reference page for information on command options.
The Disk Configuration utility provides a graphical interface to several disk maintenance tasks that can also be done manually, using the following commands:
disklabel
-
This command can be used to install, examine, or modify the label on a disk
drive or pack.
The disk label contains information about the disk, such as
type, physical parameters, and partitioning.
See also the
/etc/disktab
file, described in the
disklabel
(4)
reference page.
newfs
- This command creates a new UFS file system on the specified
device.
The
newfs
command cannot be used to create Advanced
File System (AdvFS) domains.
Instead, use the
mkfdmn
command,
as described in
the
mkfdmn
(8)
reference page.
mkfdmn
and
mkfset
- These commands are
used to create Advanced File System (AdvFS) domains and filesets.
An example of using manual methods is provided in Section 5.7.4.
The Disk Configuration interface can be invoked as follows:
At the system prompt, type
diskconfig
.
From the CDE Front Panel, SysMan Applications pop-up menu, choose Configuration. Then select the Disk icon from the SysMan Configuration folder.
Caution
The Disk Configuration utility displays appropriate warnings when you attempt to change partition sizes. However, you should plan the changes in advance to ensure that you do not overwrite any required data. Back up any data partitions before attempting this task.
A window titled Disk Configuration on hostname is displayed. This is the main window for the Disk Configuration utility, and it lists the following information for each disk:
The disk basename, such as
dsk10
.
See
Section 5.5
for information on disk names.
The device model, such as
RZ1CB-CA
The physical location of the device, specifying Bus, Target and LUN (logical unit number). See Section 5.4 for information on the device location.
Select a device by double-clicking on the list item (or press Configure when a disk is highlighted). The following windows are displayed:
This window provides the following information and options:
A graphical representation of the disk partitions, in a horizontal bar-chart format. The currently-highlighted partition is a different color, and the details of that partition are displayed in the Selected Partition box. You can use the bar chart handles (or flags) to change the partition sizes. Position the cursor as follows:
On the center handle to change both adjacent partitions
On the top flag to move up the start of the right-hand partition
On the bottom flag to move down the end of the left-hand partition
Press MB1 and drag the mouse to move the handles.
A pull-down menu that enables you to toggle the sizing information between megabytes, bytes and blocks.
A statistics box, that displays disk information such as the device name, the total size of the disk and usage information. This box enables you to assign or edit the disk label, and create an alias name for the device.
The Selected Partition box, which displays dynamic sizes for the selected partition. These sizes are updated as you change the partitions using the bar-chart. You can also type the partition sizes directly into these windows to override the current settings. This box also enables you to select the file system for the partition and, if using AdvFS, the domain name and fileset name.
The Disk Attributes... option.
This button displays some of the physical attributes of the device.
The Partition Table... option, which is described in the following item.
This window displays a bar-chart of the current partitions in use, their sizes, and the file system in use. You can toggle between the current partition sizes, the default table for this device and the original (starting table) when this session was started. If you make errors on a manual partition change, you can use this window to reset the partition table.
Refer to the online help for more information on these windows.
After making partition adjustments, use the SysMan Menu options to mount any newly created file systems as follows:
Invoke the SysMan Menu, as described in Chapter 1
Expand the Storage options, and select Basic File System Utilities - Mount File Systems
In the Mount Operation window, select the option to mount a specific file system and press Next
In the Name and Mount Point window:
Type a mount point, such as
/usr/newusers
Type the partition name, such as
/dev/disk/dsk0g
or a domain name, such as
newusr_domain#usr
.
Your new file system is now accessible.
5.7.4 Manually Partitioning Disks
This section provides the information you need to change the partition scheme of your disks. In general, you allocate disk space during the initial installation or when adding disks to your configuration. Usually, you do not have to alter partitions; however, there are cases when it is necessary to change the partitions on your disks to accommodate changes and to improve system performance.
The disk label provides detailed information about the geometry of the
disk and the partitions into which the disk is divided.
You can change the
label with the
disklabel
command.
You must be the root
user to use the
disklabel
command.
There are two copies of a disk label, one located on the disk and one
located in system memory.
Because it is faster to access system memory than
to perform I/O, when the system boots, it copies the disk label into memory.
Use the
disklabel
-r
command
to directly access the label on the disk instead of going through the in-memory
label.
Note
Before you change disk partitions, back up all the file systems if there is any data on the disk. Changing a partition overwrites the data on the old file system, destroying the data.
When changing partitions, remember that:
You cannot change the offset, which is the beginning sector, or shrink any partition on a mounted file system or on a file system that has an open file descriptor.
If you need only one partition on the entire disk, use partition
c
.
Unless it is mounted, you must specify the raw device for
partition
a
, which begins at the start of the disk (sector
0), when you change the label.
If partition
a
is mounted,
you must then use partition
c
to change the label.
Note
that partition
c
also must begin at sector 0.
Caution
If partition
a
is mounted and you attempt to edit the disk label using the device for partition
a
, you will not be able to change the label. Furthermore, you will not receive an error message indicating that the label was not written.
Before changing the size of a disk partition, review
the current partition setup by viewing the disk label.
The
disklabel
command allows you to view the partition sizes.
The bottom, top,
and size of the partitions are in 512-byte sectors.
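Because all label figures are in 512-byte sectors, converting a partition size to megabytes is simple arithmetic. The following sketch uses the size of partition a from the example disk label shown later in this section (131072 sectors); the value is illustrative:

```shell
# Convert a partition size in 512-byte sectors to megabytes.
# 131072 sectors is an illustrative value (partition "a" in the
# example disk label).
sectors=131072
mb=$(( sectors * 512 / 1024 / 1024 ))
echo "${mb} MB"
```

At 512 bytes per sector, 2048 sectors make one megabyte, so dividing the sector count by 2048 gives the same result.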
To review the current disk partition setup, use the following
disklabel
command syntax:
disklabel
-r device
Specify the device with its directory name
(/dev)
followed by the raw device name, drive number, and partition
a
or
c
.
You can also specify the disk unit and number, such
as
dsk1
.
An example of using the
disklabel
command to view
a disk label follows:
#
disklabel -r /dev/rdisk/dsk3a
type: SCSI
disk: rz26
label:
flags:
bytes/sector: 512
sectors/track: 57
tracks/cylinder: 14
sectors/cylinder: 798
cylinders: 2570
rpm: 3600
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0           # milliseconds
track-to-track seek: 0  # milliseconds
drivedata: 0

8 partitions:
#        size   offset    fstype   [fsize bsize   cpg]
  a:   131072        0    4.2BSD     1024  8192    16   # (Cyl.    0 - 164*)
  b:   262144   131072    unused     1024  8192         # (Cyl.  164*- 492*)
  c:  2050860        0    unused     1024  8192         # (Cyl.    0 - 2569)
  d:   552548   393216    unused     1024  8192         # (Cyl.  492*- 1185*)
  e:   552548   945764    unused     1024  8192         # (Cyl. 1185*- 1877*)
  f:   552548  1498312    unused     1024  8192         # (Cyl. 1877*- 2569*)
  g:   819200   393216    unused     1024  8192         # (Cyl.  492*- 1519*)
  h:   838444  1212416    4.2BSD     1024  8192    16   # (Cyl. 1519*- 2569*)
You must be careful when you change
partitions because you can overwrite data on the file systems or make the
system inefficient.
If the partition label becomes corrupted while you are
changing the partition sizes, you can return to the default partition label
by using the
disklabel
command with the
-w
option, as follows:
#
disklabel -r -w /dev/rdisk/dsk1a rz26
The
disklabel
command allows you to change the partition label of an individual
disk without rebuilding the kernel and rebooting the system.
Use the following
procedure:
Display disk space information about the file systems by using
the
df
command.
View the
/etc/fstab
file to determine if
any file systems are being used as swap space.
Examine the disk's label by using the
disklabel
command with the
-r
option.
Refer to the
rz
(7)
and
ra
(7)
reference
pages and to the
/etc/disktab
file for information on the
default disk partitions.
Back up the file systems.
Unmount the file systems on the disk whose label you want to change.
Calculate the new partition parameters. You can increase or decrease the size of a partition. You can also cause partitions to overlap.
Edit the disk label by using the
disklabel
command with the
-e
option to
change the partition parameters, as follows:
disklabel
-e
[-r
]
disk
An editor, either the
vi
editor or that specified by the EDITOR environment variable,
is invoked so you can edit the disk label, which is in the format displayed
with the
disklabel
-r
command.
The
-r
option writes the label directly to
the disk and updates the system's in-memory copy, if possible.
The
disk
parameter specifies the unmounted disk (for example,
dsk0
or
/dev/rdisk/dsk0a
).
After you quit the editor and save the changes, the following prompt is displayed:
write new label? [?]:
Enter
y
to write the new label or
n
to discard the
changes.
Use the
disklabel
command with the
-r
option to view the new disk label.
5.7.4.1 Checking for Overlapping Partitions
Commands to mount or create file systems, add a new
swap device, and add disks to the Logical Storage Manager first check whether
the disk partition specified in the command already contains valid data, and
whether it overlaps with a partition that is already marked for use.
The
fstype
field of the disk label is used to determine when a partition
or an overlapping partition is in use.
If the partition is not in use, the command continues to execute.
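The overlap test itself is a simple interval comparison: two partitions clash when each begins before the other ends. The following sketch applies that test to partitions d and g from the example disk label shown earlier in this section; the offsets and sizes are illustrative values only:

```shell
# Two partitions [offset, offset+size) overlap when each starts
# before the other ends. The offsets and sizes below are
# illustrative values from an example disk label (partitions d
# and g, which share sectors).
d_offset=393216 d_size=552548
g_offset=393216 g_size=819200
if [ "$d_offset" -lt $(( g_offset + g_size )) ] &&
   [ "$g_offset" -lt $(( d_offset + d_size )) ]
then
    echo "overlap"
else
    echo "disjoint"
fi
```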
In
addition to mounting or creating file systems, commands like
mount
,
newfs
,
fsck
,
voldisk
,
mkfdmn
,
rmfdmn
, and
swapon
also modify the disk label, so that the
fstype
field specifies how the partition is being used.
For example, when you add
a disk partition to an AdvFS domain, the
fstype
field is
set to
AdvFS
.
If the partition is not available, these commands return an error message and ask if you want to continue, as shown in the following example:
#
newfs /dev/disk/dsk8c
WARNING: disklabel reports that basename,partition currently
is being used as "4.2BSD" data.
Do you want to continue with the operation and possibly
destroy existing data? (y/n) [n]
Applications, as well as operating system commands, can modify the
fstype
of the disk label, to indicate that a partition is in use.
See the
check_usage
(3)
and
set_usage
(3)
reference pages for more information.
5.7.5 Copying Disks
You can use the
dd
command to copy a complete disk
or a disk partition; that is, you can produce a physical copy of the data
on the disk or disk partition.
Note
Because the
dd
command was not meant for copying multiple files, you should copy a disk or a partition only on a disk that is used as a data disk or one that does not contain a file system. Use the
dump
and
restore
commands, as described in Chapter 9, to copy disks or partitions that contain a UFS file system. Use the
vdump
and
vrestore
commands, as described in AdvFS Administration, to copy disks or partitions that contain an AdvFS fileset.
UNIX protects the first block of a disk with a valid disk label because this is where the disk label is stored. As a result, if you copy a partition to a partition on a target disk that contains a valid disk label, you must decide whether you want to keep the existing disk label on that target disk.
If you want to maintain the disk label on the target disk, use the
dd
command with the
skip
and
seek
options to move past the protected disk label area on the target disk.
Note
that the target disk must be the same size as or larger than the original
disk.
To determine if the target disk has a label, use the
following
disklabel
command syntax:
disklabel
-r
target_device
You must specify the target device directory name
(/dev)
followed by the raw device name, drive number, and partition
c
.
If the disk does not contain a label, the following message is displayed:
Bad pack magic number (label is damaged, or pack is unlabeled)
The following example shows a disk that already contains a label:
#
disklabel -r /dev/rdisk/dsk1c
type: SCSI
disk: rz26
label:
flags:
bytes/sector: 512
sectors/track: 57
tracks/cylinder: 14
sectors/cylinder: 798
cylinders: 2570
rpm: 3600
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0           # milliseconds
track-to-track seek: 0  # milliseconds
drivedata: 0

8 partitions:
#        size   offset    fstype   [fsize bsize   cpg]
  a:   131072        0    unused     1024  8192         # (Cyl.    0 - 164*)
  b:   262144   131072    unused     1024  8192         # (Cyl.  164*- 492*)
  c:  2050860        0    unused     1024  8192         # (Cyl.    0 - 2569)
  d:   552548   393216    unused     1024  8192         # (Cyl.  492*- 1185*)
  e:   552548   945764    unused     1024  8192         # (Cyl. 1185*- 1877*)
  f:   552548  1498312    unused     1024  8192         # (Cyl. 1877*- 2569*)
  g:   819200   393216    unused     1024  8192         # (Cyl.  492*- 1519*)
  h:   838444  1212416    unused     1024  8192         # (Cyl. 1519*- 2569*)
If the target disk already contains a label and you do not want to keep
the label, you must clear the label by using the
disklabel
-z
command.
For example:
#
disklabel -z /dev/rdisk/dsk1c
To copy the original disk to the target disk and keep the target disk
label, use the following
dd
command syntax:
dd
if=original_disk
of=target_disk
skip=16 seek=16 bs=block_size
Specify the device directory name
(/dev)
followed
by the raw device name, drive number, and the original and target disk partitions.
For example:
#
dd if=/dev/rdisk/dsk0c of=/dev/rdisk/dsk1c \
skip=16 seek=16 bs=512k
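The skip and seek operands work on any file, so the label-preserving copy can be sketched with ordinary files instead of raw devices. In this illustration the first 14 bytes stand in for the protected label area; the file names and contents are invented for the example:

```shell
# Demonstrate dd skip/seek: copy everything past the first "label"
# block of the source while preserving the target's own label.
# The /tmp paths and 14-byte "label" are illustrative only; on a
# real disk you would use the raw device names and skip=16 seek=16.
printf 'LABEL-AAAAAAAADATA-SOURCE' > /tmp/src.img
printf 'LABEL-BBBBBBBBOLD-CONTENT' > /tmp/dst.img
# skip=1 skips one 14-byte block of input; seek=1 skips one block
# of output; conv=notrunc leaves the rest of the target untouched.
dd if=/tmp/src.img of=/tmp/dst.img bs=14 skip=1 seek=1 conv=notrunc 2>/dev/null
cat /tmp/dst.img
```

The target keeps its own first block (LABEL-BBBBBBBB) while the data beyond it now comes from the source.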
5.7.6 Cloning the System Disk
This section suggests a procedure that can be used to clone a system disk. For example, you could move your system disk from a small disk to one with larger capacity without reinstalling the operating system. Cloning involves re-creating the entire file system of one disk (the source) on a new disk (the target clone). Note that this is not presented as a definitive method, and your local system may require additional steps. The operation is best undertaken while in single-user mode.
The process assumes that you have installed the new disk as described in the hardware documentation supplied with the disk.
Identify the device special files for the source and target
disks (/dev/disk/dskNx
).
Use
dsfmgr
or
hwmgr
to identify and check disk characteristics.
See
Section 5.4
for information on using
hwmgr
and
Section 5.5
for information on using
dsfmgr
.
Examine and copy the
/etc/fstab
file.
This file describes the partitions and file systems you will need to clone.
Examine and copy the
/etc/sysconfigtab
file, which lists the swap partitions that you will need to re-create on the
target disk.
See
Chapter 3
and the
swapon
(8)
reference page.
Use
diskconfig
as described in
Section 5.7.3
to label and partition a target disk to receive the clone copy.
The size
of partitions may differ, but the layout and file system information must
be identical to the source disk.
For cloning a boot disk, you must write
a boot block to the target disk.
It is possible to change partition layouts if you do not want all source partitions, but you will need to modify the target fstab file.
If you have AdvFS domains, complete this step. Otherwise, go to step 6.
Create domains for
/
,
usr
and
var
ensuring that the partitions are of equal or greater size.
The following example assumes that the
/var
file system
exists in
/usr
:
#
mkfdmn /dev/disk/dsk1a root_tmp
#
mkfdmn /dev/disk/dsk1g usr_tmp
#
mkfset root_tmp root
#
mkfset usr_tmp usr
#
mkfset usr_tmp var
#
mkdir /clone
#
mount root_tmp#root /clone
#
vdump -0 -f - / | (cd /clone ; vrestore -x -f -)
#
mount usr_tmp#usr /clone/usr
#
vdump -0 -f - /usr | (cd /clone/usr ; vrestore -x -f -)
#
mount usr_tmp#var /clone/var
#
vdump -0 -f - /var | (cd /clone/var ; vrestore -x -f -)
Next, correct the links in
/clone/etc/fdmns
.
The copied version will be pointing to the original device special file.
Change these links to point to the device special files for the newly created
domains.
For example:
#
cd /clone/etc/fdmns/root_domain
#
rm -r *
#
ln -s /dev/disk/dsk1a .
#
cd /clone/etc/fdmns/usr_domain
#
rm -r *
#
ln -s /dev/disk/dsk1g .
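The vdump lines above follow a general pattern: a writer streams an archive to standard output, and a subshell changes into the target directory and restores from standard input. The same pattern can be sketched with the portable tar command; the /tmp paths are invented for the example:

```shell
# The dump-and-restore-via-pipe pattern, shown with portable tar.
# /tmp/srcfs and /tmp/clonefs are illustrative stand-ins for the
# source file system and the mounted clone.
mkdir -p /tmp/srcfs/etc /tmp/clonefs
echo 'fstab-data' > /tmp/srcfs/etc/fstab
# Writer archives to stdout; the subshell restores in the target.
(cd /tmp/srcfs && tar cf - .) | (cd /tmp/clonefs && tar xf -)
cat /tmp/clonefs/etc/fstab
```

Because both sides run relative to their own working directory, the copy preserves the tree layout without embedding absolute paths.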
If you have UFS file systems on the source disk, complete this step. Otherwise, go to step 7.
Create a
/clone
mount point and mount the UFS partition
(for example,
a
) of the target disk on
/clone
, as shown in the following example:
#
mount /dev/disk/dsk1a /clone
Next, dump the partition as follows:
#
dump -0u -f - /dev/disk/dsk0a | \
(cd /clone ; restore -r -f -)
Verify file ownerships and that all required file system branches
were dumped.
The following
diff
command sequence will
help you do this and provide a record of the dump:
#
ls -R -l /clone > /newfiles
#
cd /
#
umount /clone
#
ls -R -l > /oldfiles
#
diff /newfiles /oldfiles > files.diff
If differences occur, remount the source and correct them.
You can edit the
files.diff
file to create a script that
you run to correct errors.
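The listing-and-diff check above can be exercised on any pair of directory trees. A minimal sketch, with invented /tmp paths standing in for the source and the clone:

```shell
# Verify that a copied tree matches the original by diffing
# recursive listings. The /tmp paths are illustrative.
mkdir -p /tmp/orig_tree /tmp/clone_tree
echo data > /tmp/orig_tree/a
cp -p /tmp/orig_tree/a /tmp/clone_tree/a   # -p preserves timestamps
(cd /tmp/clone_tree && ls -R -l) > /tmp/newfiles
(cd /tmp/orig_tree && ls -R -l) > /tmp/oldfiles
diff /tmp/newfiles /tmp/oldfiles && echo 'trees match'
```

An empty diff (and the trees match message) means the listings agree; any output lines mark files to remount the source and correct.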
If you used this process to create a bootable clone disk,
examine the
/etc/fstab
file before booting off the new
disk.
Make any necessary changes to partition mounts.
Similarly, make any
changes to swap in /etc/sysconfigtab
.
To test the clone, shut down and halt the system, then reboot specifying the new boot disk as follows:
>>>
show devices
Determine the SCSI address of the target, and its configuration device name, such as DKxNNN.
Boot from the cloned disk as follows:
>>>
boot DKA200
If the boot is successful, and all system features appear
to be functioning correctly, you can permanently swap the source and target
disks by changing the appropriate console environment variables, physically
swapping the devices, or using
hwmgr
.
The bootable tape utility described in
Chapter 9
provides a method of creating a bootable standalone kernel on a magnetic tape.
This method may enable faster recovery if you have problems with the root
disk.
Consider also some of the features offered by the Logical Storage Manager
(LSM) that enable you to create a disk mirror as a copy of the root disk.
5.7.7 Monitoring Disk Use
To ensure an adequate amount of free disk space, you should regularly monitor the disk use of your configured file systems. You can do this in any of the following ways:
Check available free space by using the
df
command
Check disk use by using the
du
command
or the
quot
command
Verify disk quotas (if imposed) by using the
quota
command
You can use the
quota
command only if you are the
root user.
5.7.7.1 Checking Available Free Space
To ensure sufficient space for your configured
file systems, you should regularly use the
df
command to
check the amount of free disk space in all of the mounted file systems.
The
df
command displays statistics about the amount of free disk space
on a specified file system or on a file system that contains a specified file.
The
df
command has the following syntax:
df [-eiknPt] [-F fstype] [file | file_system ...]
With no arguments or options, the
df
command
displays the amount of free disk space on all of the mounted file systems.
For each file system, the
df
command reports the file
system's configured size in 512-byte blocks, unless you specify the
-k
option, which reports the size in kilobyte blocks.
The
command displays the total amount of space, the amount presently used, the
amount presently available (free), the percentage used, and the directory
on which the file system is mounted.
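For scripting, the capacity column can be pulled out of the df output directly. A small sketch, using the POSIX -P option for a stable one-line-per-file-system format (the path /tmp is only an example):

```shell
# Print the percentage used for the file system containing /tmp.
# -P forces the portable single-line output format; -k reports
# sizes in kilobyte blocks. NR == 2 selects the first data line
# after the header.
df -kP /tmp | awk 'NR == 2 { print $5 }'
```

The output is a single value such as 42%, suitable for comparison in a monitoring script.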
For AdvFS file domains, the
df
command displays disk
space usage information for each fileset.
If you specify a device that has no file systems mounted on it,
df
displays the information for the root file system.
You can specify a file path name to display the amount of available disk space on the file system that contains the file.
Refer to the
df
(1)
reference page for more information.
Note
You cannot use the
df
command with the block or character special device name to find free space on an unmounted file system. Instead, use the
dumpfs
command.
The following example displays disk space information about all the mounted file systems:
#
/sbin/df
Filesystem           512-blks    used   avail capacity  Mounted on
/dev/disk/dsk2a         30686   21438    6178    77%    /
/dev/disk/dsk0g        549328  378778  115616    76%    /usr
/dev/disk/dsk2g        101372    5376   85858     5%    /var
/dev/disk/dsk3c        394796      12  355304     0%    /usr/users
/usr/share/mn@tsts     557614  449234   52620    89%    /usr/share/mn
domain#usr             838432  680320  158112    81%    /usr
Note
The
newfs
command reserves a percentage of the file system disk space for allocation and block layout. This can cause the
df
command to report that a file system is using more than 100 percent of its capacity. You can change this percentage by using the
tunefs
command with the
-minfree
flag.
If
you determine that a file system has insufficient space available, check how
its space is being used.
You can do this with the
du
command
or the
quot
command.
The
du
command pinpoints disk space allocation by
directory.
With this information you can decide who is using the most space
and who should free up disk space.
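To rank the consumers, the du -s output can be piped through sort. The following sketch builds a small illustrative directory tree under /tmp; in practice you would point du at a real location such as /usr/users:

```shell
# Rank subdirectories by disk use, largest first. The /tmp tree
# is created only for illustration; real use would target a
# directory such as /usr/users.
mkdir -p /tmp/du_demo/alice /tmp/du_demo/bob
dd if=/dev/zero of=/tmp/du_demo/alice/big bs=1024 count=64 2>/dev/null
dd if=/dev/zero of=/tmp/du_demo/bob/small bs=1024 count=8 2>/dev/null
# -s summarizes each argument; -k reports kilobytes; sort -rn
# orders numerically, largest first.
du -sk /tmp/du_demo/* | sort -rn
```

The first line of output names the directory using the most space.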
The
du
command has the following syntax:
/usr/bin/du [-aklrsx] [directory ... | filename ...]
The
du
command displays the number of blocks contained
in all directories (listed recursively) within each specified directory, file
name, or (if none are specified) the current working directory.
The block
count includes the indirect blocks of each file in 1-kilobyte units, independent
of the cluster size used by the system.
If you do not specify any options, an entry is generated only for each
directory.
Refer to the
du
(1)
reference page for more information on command
options.
The following example displays a summary of blocks that all main subdirectories
in the
/usr/users
directory use:
#
/usr/bin/du -s /usr/users/*
440     /usr/users/barnam
43      /usr/users/broland
747     /usr/users/frome
6804    /usr/users/morse
11183   /usr/users/rubin
2274    /usr/users/somer
From this information, you can determine that user rubin is using the most disk space.
The following example displays the space that each file and subdirectory
in the
/usr/users/rubin/online
directory uses:
#
/usr/bin/du -a /usr/users/rubin/online
1       /usr/users/rubin/online/inof/license
2       /usr/users/rubin/online/inof
7       /usr/users/rubin/online/TOC_ft1
16      /usr/users/rubin/online/build
.
.
.
251     /usr/users/rubin/online
Note
As an alternative to the
du
command, you can use the
ls -s
command to obtain the size and usage of files. Do not use the
ls -l
command to obtain usage information;
ls -l
displays only file sizes.
You can use the
quot
command to list the number of
blocks in the named file system currently owned by each user.
You must be root user to use the
quot
command.
The
quot
command has the following syntax:
/usr/sbin/quot [-c] [-f] [-n] [file_system]
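The per-owner accounting that quot performs can be approximated with portable tools, which also shows what the command computes. A sketch using an invented /tmp directory; the awk program sums 1-kilobyte blocks per file owner (field 3 of ls -l) from the file sizes (field 5):

```shell
# Approximate quot's report: sum 1-kilobyte blocks per file owner.
# /tmp/quot_demo is illustrative; quot itself reads a raw file
# system device, not a directory tree.
mkdir -p /tmp/quot_demo
dd if=/dev/zero of=/tmp/quot_demo/f1 bs=1024 count=4 2>/dev/null
find /tmp/quot_demo -type f -exec ls -l {} + |
awk '{ blocks[$3] += int(($5 + 1023) / 1024) }
     END { for (u in blocks) print blocks[u], u }'
```

Each output line gives a block count followed by an owner name, which is the shape of the quot report.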
The following example displays the number of blocks used by each user
and the number of files owned by each user in the
/dev/disk/dsk0h
file system:
#
/usr/sbin/quot -f /dev/disk/dsk0h
Note
The character device special file must be used to return the information, because when the device is mounted the block special device file is busy.
Refer to the
quot
(8)
reference page for more information.