This chapter describes the commands and utilities available to assist you in administering the system hardware components. The utilities work on single systems and on systems joined into clusters. Hardware management involves viewing the status of system components and performing administrative options on them. This includes adding and removing components, troubleshooting components that are not working, and monitoring components to prevent problems before they occur.
You might also need to administer the software that is associated with components, such as drivers, kernel pseudodevices, and device special files. This software enables a component to communicate and transfer data between different parts of the system. Information on administering the related software components is included in this chapter.
Most operations require root user privileges, but you can assign such privileges to nonroot users by using the SysMan division of privileges (DOP) feature. See dop(8) for more information.
This chapter contains the following sections:
Section 5.1 provides a conceptual overview of hardware management and relates it to the organization of information in this chapter.
Section 5.2 lists other documentation resources that apply to hardware management, including reference pages for commands and utilities. It also identifies key system files and provides pointers to utilities that are associated with hardware administration.
Section 5.3 describes the SysMan hardware management options.
Section 5.4 describes the hardware manager command. This command provides full access to hardware management options.
Section 5.5 describes how to use the dsfmgr command to manage device special files.
Section 5.6 describes how to manually add components that you cannot add by using hardware manager, and how you create pseudodevices.
Section 5.7 describes targeted utilities for managing hardware.
The hwmgr command also enables you to hot swap CPUs. For information on this feature, see hwmgr_ops(8) and the Managing Online Addition and Removal guide.
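For example, a hot-swap procedure typically brackets the physical swap with offline and online operations. The following is a minimal sketch, assuming the CPU to be replaced has HWID 2; verify the exact syntax in hwmgr_ops(8) before use:
# hwmgr status component -id 2
# hwmgr offline -id 2
(physically replace the CPU)
# hwmgr online -id 2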
5.1 Understanding Hardware
A hardware component is any discrete part of the system such as a CPU, a networking card, or a hard disk. The system is organized in a hierarchy with the CPUs at the top and peripheral components, such as disks and tapes, at the bottom. This is sometimes also referred to as the system topology. The following components are typical of the device hierarchy of most computer systems, although it is not a definitive list:
The central processing unit (CPU), which might be a single processor system, a multiprocessor system, or a set of processors joined into a cluster. The system is sometimes referred to as a host in the context of hardware management and has a designated host name and perhaps also a host address if the system is on a network. You often specify commands using the host name. The CPUs are the top of the system hardware hierarchy, and all other system components are organized under the CPUs.
Typical administrative tasks associated with the CPU are many, such as bringing CPUs online, starting and stopping them, or sharing CPU resources. These tasks are documented throughout this guide, such as Chapter 2, which documents the options for shutting down the system.
Buses - A system might have a number of main internal communication buses, which transfer data between components of the system. Adapters and controllers are physically plugged into buses and have both physical and logical addresses.
Buses might have special software associated with the physical bus, but that software is usually managed within the context of the UNIX operating system. For example, when adding an option card such as a sound or network card to a PCI bus, you have to shut down the system, add the hardware, and reboot. Such components are usually automatically recognized and added to the system configuration on reboot, but you might need to run a firmware utility to install a driver for the device. Always consult your system documentation and the documentation that comes with the card for information on adding such components.
Controllers and Adapters - A system might have a number of controllers such as SCSI controllers, which control one or more storage devices. There might be other controllers, such as the floppy disk interface (fdi), that support one kind of disk and usually have only one physical disk attached to the controller. A network adapter might be connected to a bus, but does not have any other components below it in the hierarchy other than the network cabling.
Adapters occupy a physical slot on a bus, which gives them both a logical address and a physical location to administer. They might also provide slots for other components, which also have physical and logical addresses.
Storage devices , such as SCSI disks or CDROM readers, are among the lowest entities in the system hierarchy. They are typically attached to a controller or adapter, and often have both a physical location and a logical address to administer.
Storage (and other) devices might be shared by components or members of a cluster. This means that a component might have different names and identifiers associated with it depending how you access the component. Understanding how to identify a component, and how that component appears to the rest of the hierarchy, is an important aspect of hardware management. You often need to know both logical and physical locations of components.
When referring to SCSI devices in this chapter, the SCSI disk is most frequently referenced as an example. It is often the target of hardware management tasks and might appear to the system as a single device, or as a group or array. For example:
RZ devices - The operating system supports storage devices that conform to the Small Computer System Interface (SCSI) interface technology. Not all SCSI devices closely conform to this standard, and the system might not automatically detect and add such devices. You might need to use ddr_config as described in Section 5.6 to add such devices.
HSG and HSZ devices - Storage arrays based on Redundant Array of Inexpensive Disks (RAID) technology. These are storage boxes that contain several connected SCSI disks, appearing to the system as a single device. They might support features such as hot-swapping, failover, and redundancy, and be connected to the system by fibre channel controllers. Such storage arrays can be shared between many systems in a storage area network.
You use applications such as the StorageWorks Command Console (SWCC) to manage storage arrays and storage area networks. In such configurations, you can accomplish only a small proportion of your storage management tasks using features of the operating system, such as the hwmgr command. Consult your StorageWorks documentation for complete information on how you configure and manage storage arrays.
See RAID(7), SCSI(7), and rz(7) for more information on device characteristics. See tz(7) for more information on tape devices. See the Technical Overview and the Software Product Description for the current supported standards for RAID and SCSI.
Hardware management involves understanding how all the components relate to each other, how they are logically and physically located in the system topology, and how the system software recognizes and communicates with components. To better understand the component hierarchy of a system, refer to Chapter 1 for an introduction to the SysMan Station. This is a graphical user interface that displays topological views of the system component hierarchy and allows you to manipulate such views.
The majority of hardware management tasks are automated. When you add a supported SCSI disk to a system and reboot the system, the disk is automatically detected and configured into the system. The operating system dynamically loads required drivers and creates the device special files. You need only to partition the disk and create file systems on the partitions (described in Chapter 6) before you use it to store data. However, you must periodically perform some hardware management tasks manually, such as when a disk crashes and you need to bring a replacement disk online at the same logical location. You might also need to manually add components to a running system or redirect I/O from one disk to another disk. This chapter focuses on these manual tasks.
Many other hardware management tasks are part of regular system operations and maintenance, such as repartitioning a disk or adding an adapter to a bus. Often, such tasks are fully described in the hardware documentation that accompanies the component itself, but you often need to perform tasks such as checking the system for the optimum (or preferred) physical and logical locations for the new component.
Another important aspect of hardware management is preventative maintenance and monitoring. Use the following operating system features to maintain a healthy system environment:
The Event Manager (EVM) - An event logging system that filters system events and then notifies you of selected events. It includes sophisticated features for warning you of problems by electronic mail or a pager. Refer to Chapter 13 for information on configuring EVM. (See the example following this list.)
The SysMan Station - A graphical user interface that enables you to view and monitor the entire system (or cluster) hardware and launch applications to perform administrative tasks on components. You can also launch these applications from the SysMan Menu, and some example applications are described later in this chapter (see Section 5.3). For information on using the SysMan tasks, refer to Chapter 1.
The system census tool, sys_check - This command provides you with data on your system's current configuration as an HTML document that you can read with a Web browser. You can use the data as a system baseline, perform tuning tasks, and check all log files. The Storage configuration section provides information on storage devices and file systems. Refer to Chapter 3 for information on running this utility, and on configuring it to run regularly. (See the example following this list.)
Insight Manager - An enterprise-wide, Web-based management tool that enables you to view system and component status anywhere in your local area network. It includes launch points for the SysMan Station, the SysMan Menu, and the system census utility, sys_check. See insight_manager(5) for more information.
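For example, you might watch hardware events as they are posted and capture a configuration baseline for later comparison. The following is a sketch; the output file name is an arbitrary choice:
# evmwatch | evmshow
# sys_check > /var/adm/baseline.html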
The organization of this chapter reflects the hardware and software components that you manage as follows:
Generic hardware management tools - These tools enable you to perform operations on all components of a type, classes of component such as SCSI tapes, or individual components. The tools might in some cases operate on all systems in a cluster. An example of such a tool is the SysMan Station, which provides you with a graphical display of the entire component hierarchy for all members of a cluster.
Software management - This involves the administration of the software that is associated with hardware components on the system, principally managing the device special files. These are the files associated with a hardware component that enable any application to access its device driver or pseudodriver.
Targeted hardware management tools - These tools enable you to perform operations that are targeted to a specific component and perform a specific task. An example is the disk configuration command line interface, disklabel, and the analogous graphical user interface, Disk Configuration (diskconfig), which enable you to partition a disk by using the standard layouts or your own custom layouts. (See the example following this list.)
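For example, you might read a disk's current label before repartitioning it. The following is a sketch, assuming the disk is dsk0; see disklabel(8) for the full syntax:
# disklabel -r dsk0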
Another way to think of this is that with a generic tool you can perform a task on many components, while with a targeted tool you can perform a task on only a single component. Unless stated otherwise, operations apply to a single system or to a cluster. See the TruCluster documentation for additional information on managing cluster hardware.
5.2 Reference Information
The following sections contain reference information related to documentation, system files, and related software tools. Some tools described here are obsolete and scheduled for removal in a future release. Consult the Release Notes for a list of operating system features that are scheduled for retirement, and migrate to the replacements as soon as possible. Check your site-specific shell scripts for any calls that might invoke an obsolete command.
5.2.1 Related Documentation
The following documentation contains information on hardware management:
Guides (available online or hardcopy):
Device documentation - Consult the device documentation for information on installing the device and for any required operating system or configuration settings.
Network Administration: Connections and Network Administration: Services - Provide information on configuring or connecting network components.
Device Driver Documentation Kit - Contains related documents such as: Writing PCI Bus Device Drivers and Writing Device Drivers: Reference.
Logical Storage Manager - The Logical Storage Manager (LSM) consists of physical disk devices, logical entities, and the mappings that connect them. Refer to this document for information on LSM concepts and commands.
Reference pages:
hwmgr(8) - Summary information on the syntax and usage of the hardware manager command, /sbin/hwmgr.
hwmgr_ops(8) - System operation options for the /sbin/hwmgr command. Use these options to perform procedures such as CPU hot swap.
hwmgr_show(8) - Hardware information options for the /sbin/hwmgr command. Use these options to display information from the hardware databases.
hwmgr_get(8) - Component attribute information options for the /sbin/hwmgr command. Use these options to obtain and configure component attributes.
hwmgr_view(8) - Status information options for the /sbin/hwmgr command. Use these options to view component and system status.
dsfmgr(8) - Contains complete information on the command syntax for the device special file management command. Use this command to create device special files in the /dev directory. Refer also to Section 5.5.
mknod(8), MAKEDEV(8), scu(8), ddr_config(8), and devswmgr(8) - Reference pages that cover miscellaneous commands and utilities that you might use while administering devices.
The command line and graphical user interfaces also provide extensive online help.
5.2.2 Identifying Hardware Management System Files
The following system files contain static or dynamic information that the system uses to configure the component into the kernel. Do not edit these files manually even if they are ASCII text files. Some files are context-dependent symbolic links (CDSLs), as described in Chapter 6. If the links are accidentally broken, clustered systems cannot access the files until the links are recreated.
Note
Although some hardware databases are text format, you must not edit the databases. Use only the appropriate command.
The /dev directory contains device special files. Refer to Section 5.5 for more information. (See the example following this list.)
/etc/ddr_dbase - The device dynamic recognition (DDR) device information database. The content of this file is compiled into the binary file /etc/ddr.db, which the system uses to obtain device information.
/etc/dec_devsw_db - This is a binary database owned by the kernel dev switch code. This database keeps track of the driver major numbers and driver switch entries.
/etc/disktab - This file specifies the disk geometry and partition layout tables. This file is useful for identifying disk device names and certain disk device attributes.
/etc/dvrdevtab - This file specifies the database name and the mapping of driver names to special file handlers.
/etc/gen_databases - A text file that contains the information required to convert a database name to a database file location and a database handler.
/etc/dec_hw_db - This is a binary database that contains hardware persistence information. Generally, this refers to hardware such as buses or controllers.
/etc/dec_hwc_ldb - This is a binary database that contains information on hardware components that are local to a cluster member.
/etc/dec_hwc_cdb - This is a binary database that contains information on hardware components that are shared by all members of a cluster. Hardware components with unique cluster names or mapped to dev_t are stored in this database.
/etc/dec_scsi_db - This is a binary database owned by SCSI/CAM. It stores the worldwide identifier (WWID) of SCSI devices and enables CAM to track all SCSI devices that are known to the system.
/etc/dec_unid_db - This is a binary database that stores the highest hardware identifier (HWID) assigned to a hardware component. The operating system uses this database to generate the next HWID that the system automatically assigns to a newly-installed hardware component. The system never reuses an HWID. For example, assume you add a disk to a system and it is assigned an HWID of 124. Even if you remove that disk permanently from the system, the HWID 124 is never reassigned to its replacement disk or to any other device. The only way that you can reset the HWID numbering sequence is to perform a fresh installation of the operating system.
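To see which device special files currently exist, you can list the /dev subdirectories directly; for example, for disks and tapes:
# ls /dev/disk /dev/ntape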
5.2.3 WWIDs and Shared Devices
SCSI device naming is based on the logical identifier (ID) of a device. This means that the device special filename has no correlation to the physical location of a SCSI device. UNIX uses information from the device to create an identifier called a worldwide identifier, which is usually written as WWID.
Ideally, the WWID for a device is unique, enabling the identification of every SCSI device attached to the system. However, some legacy disks (and even some new disks available from third-party vendors) do not provide the information required to create a unique WWID for a specific device. For such devices, the operating system attempts to generate a WWID, and in the extreme case uses the device nexus (its SCSI bus/target/LUN) to create a WWID for the device.
Consequently, do not use devices that do not have a unique WWID on a shared bus. If a device that does not have a unique WWID is put on a shared bus, a different device special file is created for each different path to the device. This can lead to data corruption if the operating system uses two different device special files to access the same device at the same time. To determine if a device has a cluster-unique WWID, use the following command:
# hwmgr show components
If a device has the c flag set in the FLAGS field, then it has a cluster-unique WWID and you can place it on a shared bus. Such devices are referred to as "cluster-shareable" because you can put them on a shared bus within a cluster.
Note
Exceptions to this rule are HSZ devices. Although an HSZ device might be marked as cluster shareable, some firmware revisions on the HSZ preclude multi-initiators from probing the device at the same time. See the owner's manual for the HSZ device and check the Release Notes for any current restrictions.
The following example displays all the hardware components that have cluster-unique WWIDs:
# hwmgr show comp -cs
HWID: HOSTNAME FLAGS SERVICE COMPONENT NAME
-----------------------------------------------
35:   pmoba    rcd-- iomap   SCSI-WWID:0410004c:"DEC RZ28 ..."
36:   pmoba    -cd-- iomap   SCSI-WWID:04100024:"DEC RZ25F ..."
42:   pmoba    rcd-- iomap   SCSI-WWID:0410004c:"DEC RZ26L ..."
43:   pmoba    rcds- iomap   SCSI-WWID:0410003a:"DEC RZ26L ..."
48:   pmoba    rcd-- iomap   SCSI-WWID:0c000008:0000-00ff-fe00-0000
49:   pmoba    rcd-- iomap   SCSI-WWID:04100020:"DEC RZ29B ..."
50:   pmoba    rcd-- iomap   SCSI-WWID:04100026:"DEC RZ26N ..."
You might have a requirement to make a device available on a shared bus even though it does not have a unique WWID. Using such devices on a shared bus is not recommended, but there is a method that enables you to create such a configuration. See Section 5.4.4.10 for a description of how you use the hwmgr edit scsi command option to create a unique WWID.
5.2.4 Related Commands and Utilities
The following commands are also available to you for use in managing devices:
The system exerciser utilities enable you to test devices for correct operation. See diskx(8), tapex(8), cmx(8), fsx(8), and memx(8). See also Chapter 12.
The scu command enables you to maintain and diagnose problems with SCSI peripherals and the CAM I/O subsystem. See scu(8) and the online help for the command. (See the example following this list.)
Use the sysconfig command to query or modify the kernel subsystem configuration. You use this command to add subsystems to your running kernel, reconfigure subsystems already in the kernel, ask for information about (query) subsystems in the kernel, and unconfigure and remove subsystems from the kernel. You can use the sysconfig command to set some component attribute values. For information on using the sysconfig command, refer to Chapter 4, which also documents the Kernel Tuner (dxkerneltuner). The Kernel Tuner is a graphical user interface that you can also use to modify attribute values. (See the example following this list.)
CDE Application Manager - SysMan Applications pop-up and System_Admin folders contain several hardware management tools, for example:
Configuration - Graphical user interfaces that you use to configure hardware such as ATM, Disk devices, Network devices, PPP (modem) devices, and LAT devices.
DailyAdmin - A graphical user interface for power management, which you use to set power attributes for certain devices.
SysMan Checklist, SysMan Menu, and SysMan Station - Provide interfaces to configure, monitor, and maintain system devices. You can invoke the SysMan Menu and SysMan Station from a variety of platforms, such as a personal computer or an X11-based environment. This enables you to perform remote monitoring and management of devices. See Chapter 1 for more information.
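As a sketch of the scu and sysconfig items in the preceding list, you might scan the CAM equipment device table and then query a kernel subsystem; the cam subsystem name here is an illustrative choice, so substitute any subsystem on your system:
# scu show edt
# sysconfig -q cam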
5.3 Using the SysMan Hardware Tasks
The SysMan Menu provides tasks that you can use for basic hardware management. You can also use the SysMan Station to obtain information about hardware components and to launch hardware management tasks.
The SysMan tasks provide you with a subset of the hardware management features available from the command line through the hwmgr command. A more detailed discussion of the hwmgr command and its options is located in Section 5.4. See hwmgr(8) for a complete listing of the command syntax and options. Selecting the help option in one of the SysMan Menu hardware tasks invokes the appropriate reference pages.
When you invoke the SysMan Menu as described in Chapter 1, hardware management options are available under the Hardware branch of the menu. Expanding this branch displays the following tasks:
View hardware hierarchy
View cluster
View device information
View central processing unit (CPU) information
Manage CPUs
Online Addition/Replacement (OLAR) policy information
These tasks launch basic hardware management tasks that are described in the following sections. See Managing Online Addition and Removal for information on online addition and removal (OLAR).
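If your system supports SysMan Menu accelerator keywords, you might open this branch directly from a shell; the hardware keyword in the following sketch is an assumption, so check sysman(8) for the accelerators available on your system:
# /usr/sbin/sysman hardware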
The following option buttons (or choices, in a terminal) are available in all the tasks:
Rerun - Runs the command again, updating the information in the display.
Stop - Stops the command. Use the Rerun option to update the information or choose OK to exit.
OK - Ends the task and closes the window.
Help - Displays the reference page.
5.3.1 Viewing the Hardware Hierarchy
The "View hardware hierarchy" task invokes the /sbin/hwmgr view hierarchy command. The following example shows output from a single-CPU system that is not part of a cluster:
View hardware hierarchy
HWID: hardware component hierarchy
---------------------------------------------------
1:  platform AlphaServer 800 5/500
2:  cpu CPU0
4:  bus pci0
5:  connection pci0slot5
13: scsi_adapter isp0
14: scsi_bus scsi0
30: disk bus-0-targ-0-LUN-0 dsk0
31: disk bus-0-targ-4-LUN-0 cdrom0
7:  connection pci0slot6
15: graphics_controller trio0
9:  connection pci0slot7
16: bus eisa0
17: connection eisa0slot9
18: serial_port tty00
19: connection eisa0slot10
display truncated
Use this task to display the hardware hierarchy for the entire system or cluster. The hierarchy shows every bus, controller, and other components on the system from the CPUs down to the individual peripheral components such as disks and tapes. On a system or cluster that has many devices, the output is lengthy and you might need to scroll the display to see components at the beginning of the output.
The output is useful because it provides you with component information that you can specify with hwmgr command options to perform hardware management operations such as viewing more component detail and adding or deleting devices. You can use the following items shown in the hierarchy as command input:
HWID - The hardware identifier (or id), an integer that is unique to each individual entry in the hierarchy.
The component name, such as pci for the Peripheral Component Interconnect (PCI) bus.
The component basename, a mnemonic followed by an integer that identifies the component, such as cdrom0, which relates to the device special file for the component (/dev/disk/cdrom0). More information on device special file names is located in Section 5.5.
The physical location attribute specifies the address or path to a device, such as bus-0-targ-0-LUN-0, sometimes written as 0/0/0, which provides the following information:
bus-0 is the number of the bus to which the component is attached, in this case SCSI bus scsi0.
targ-0 is the target number for this component on the bus, in this case the first target on bus 0.
LUN-0 is the logical unit number (LUN), in this case the first logical unit number at target 0 on bus 0.
The hardware category of a device, such as a bus or ide_controller.
Connections to slots, which show the slot number for a device, such as pci0slot5 and eisa0slot9.
Bus, controller, and component relationships, such as the following sample output showing two disk devices on controller scsi_adapter isp0, which is on the bus scsi_bus scsi0:
13: scsi_adapter isp0
14: scsi_bus scsi0
30: disk bus-0-targ-0-LUN-0 dsk0
31: disk bus-0-targ-4-LUN-0 cdrom0
Because the same component might be shared (for example, on a shared bus) it might appear in the hierarchy more than once and has a unique identifier each time it appears. An example of shared devices is provided in Section 5.4.4.7.
You can use the information from the view hierarchy command output in other hwmgr commands when you want to focus an operation on a specific hardware component, as shown in the following command, which gets the value of a component attribute named device_starvation_time for the component with the HWID (id) of 30. Component 30 is the SCSI disk at bus 0, target 0, and LUN 0 in the example hierarchy:
# /sbin/hwmgr get attr -id 30 -a device_starvation_time
30: device_starvation_time = 25 (settable)
The output shows that the value of the device_starvation_time attribute is 25. The label (settable) indicates that this is a configurable attribute that you can set by using the following command option:
# /sbin/hwmgr set attr -id 30 -a device_starvation_time=30
Understand the impact of the changes before modifying the value of any component attribute. See the documentation provided with a device.
5.3.2 Viewing the Cluster
Selecting the "View cluster" task invokes the command /sbin/hwmgr view cluster, directing the output to the SysMan Menu window (or screen, if a terminal) as follows:
View cluster
Starting /sbin/hwmgr view cluster ...
/sbin/hwmgr view cluster run at Fri May 21 13:42:37 EDT 1999
Member ID State Member HostName
--------- ----- ---------------
1         UP    rene (localhost)
31        UP    witt
10        UP    rogr
If you attempt to run this command on a system that is not a member of a cluster, the following message is displayed:
hwmgr: This system is not a member of a cluster.
You can specify the Member ID and the HostName in some hwmgr commands when you want to focus an operation on a specific member of a cluster, as shown in the following example:
# /sbin/hwmgr scan scsi -member witt
5.3.3 Viewing Device Information
Selecting the "View device information" task invokes the command /sbin/hwmgr view devices, directing the output to the SysMan Menu window (or screen, if a terminal). Use this option to display the component information for the entire system or cluster. The output shows every component and pseudo-device (such as the /dev/kevm pseudo-device) that is connected to the system. The following example shows the output from a small single-CPU system that is not part of a cluster:
View device information
Starting /sbin/hwmgr view devices ...
/sbin/hwmgr view devices run at Fri May 21 14:20:08 EDT 1999
HWID: Device Special File Mfg Model        Location Name
------------------------------------------------------------------
3:  /dev/kevm
28: /dev/disk/floppy0c        3.5in floppy          fdi0-unit-0
30: /dev/disk/dsk0c     DEC   RZ1DF-CB(C)DEC        bus-0-targ-0-LUN-0
31: /dev/disk/cdrom0c   DEC   RRD47 (C)DEC          bus-0-targ-4-LUN-0
For the purpose of this command, a component is any entity in the hierarchy that has the attribute dev_base_name and has an associated device special file (DSF). The output from this command provides the following information that you can use with the hwmgr command to perform hardware management operations on the device:
HWID - The hardware identifier (or id), an integer that is unique to each individual entry in the hierarchy.
The DSF Name, such as /dev/disk/cdrom0c. In the case of disk devices, this is the name of the device special file associated with the c partition that maps to the entire capacity of the disk. For a tape, it shows the device special file name that maps to the default density for the device. See Section 5.5 for a description of these names.
The model, which specifies a manufacturer model number or a generic description such as 3.5in floppy.
The physical location of a device, such as the SCSI bus-0-targ-0-LUN-0, sometimes written as 0/0/0, which specifies the following:
bus-0 - The number of the bus to which the component is attached; in this case, it is SCSI bus 0.
targ-0 - The target number for this component on the bus, in this case the first target on the bus.
LUN-0 - The logical unit number, in this case the first on the bus.
The previous output also shows a floppy disk attached to the floppy disk interface, fdi, as device 0, unit 0.
You can specify this information to certain hwmgr commands to perform hardware management operations on a particular device. The following example of disk location specifies a device special file for a disk, causing the light (LED) on that disk to flash for 30 seconds:
# /sbin/hwmgr flash light -dsf /dev/disk/dsk3 -nopause
The preceding command does not work for CD-ROM readers or for disks that are part of a managed array, such as an HSZ80.
5.3.4 Viewing CPU Information
Selecting the "View central processing unit (CPU) information" task invokes the command /usr/sbin/psrinfo -v, directing the output to the SysMan Menu window (or screen, if a terminal). Use this option to display the CPU status information, as shown in the following sample output for a single-processor system. The output from this task describes the processor and its status:
/usr/sbin/psrinfo
Starting /usr/sbin/psrinfo -v ...
/usr/sbin/psrinfo -v run at Fri May 21 14:22:05 EDT 1999
Status of processor 0 as of: 05/21/99 14:22:05
Processor has been on-line since 05/15/1999 14:42:28
The alpha EV5.6 (21164A) processor operates at 500 MHz,
and has an alpha internal floating point processor.
5.3.5 Using the SysMan Station
The SysMan Station is a graphical user interface that runs under various windowing environments or from a web browser. See Chapter 1 and the online help for information on launching and using the SysMan Station.
Features of the SysMan Station that assist you in hardware management are as follows:
The SysMan Station provides a live view of system and component status. You can customize views to focus on parts of a system or cluster that are of most interest to you. You are notified when a hardware problem occurs on the system by color changes to icons displayed by the GUI. System views are hierarchical, showing the complete system topology from CPUs down to discrete components such as tapes.
You can observe the layout of buses, controllers, and adapters and see their logical addresses. You can see what components are attached to each bus or controller, and their slot numbers. Such information is useful for running hwmgr commands from the command prompt.
You can select a component and view detailed attributes of that device. For example, if you select a SCSI disk and press the right mouse button, a menu of options is displayed. You can choose to view the component properties for the selected disk. If you opt to do this, an extensive table of component properties is displayed. This action is the same as using the hwmgr command, as shown in the following (truncated) sample output:
# hwmgr get attr -id 30
30:
  name = SCSI-WWID:0c000008:0060-9487-2a12-4ed2
  category = disk
  sub_category = generic
  architecture = SCSI
  phys_location = bus-0-targ-0-LUN-0
  dev_base_name = dsk0
  access = 7
  capacity = 17773524
  block_size = 512
  open_part_mask = 59
  disk_part_minor_mask = 4294967232
  disk_arch_minor_mask = 4290774015
display truncated
When you select a device, you can also choose to launch a command and perform configuration or daily administrative operations on the selected device. For example, if you select a network adapter, you can configure its settings or perform related tasks such as configure the domain name server (DNS). You can launch the Event Viewer to see if any system events (such as errors) pertaining to this component are posted.
You can also run the SysMan Station from within Insight Manager and use it from a PC, enabling you to remotely manage system hardware. See Chapter 1 for more information on remote management options.
5.4 Using hwmgr to Manage Hardware
The principal command that you use to manage hardware is the hwmgr command line interface (CLI). Other interfaces, such as the SysMan tasks, provide a limited subset of the features provided by hwmgr. For example, you can use hwmgr to set an attribute for all components of a particular type (such as SCSI disks) on all SCSI adapters in all members of a cluster.
Most hardware management is performed automatically by the system and you need only intervene under certain circumstances, such as replacing a failed component so that the replacement component takes on the identity of the failed device. The following sections provide information on:
Understanding the hardware management model
Understanding the principal user options available for the hwmgr command
Performing administrative tasks by using the hwmgr command
5.4.1 Understanding the Hardware Management Model
Within the operating system kernel, hardware data is organized as a hardware set managed by the kernel set manager. Application requests are passed by library routines to kernel code, or remote code. The latter deals with requests to and from other systems. The hardware component module (HWC) resides in the kernel, and contains all the registration routines to create and maintain hardware components in the hardware set. It also contains the device nodes for device special file management, which is performed by using the dsfmgr command.
The hardware set consists of data structures that describe all of the hardware components that are part of the system. A hardware component becomes part of the hardware set when registered by its driver. Many components support attributes that describe their function and content or control how they operate. Each attribute is assigned a value. You can read, and sometimes manipulate, these attribute values by using the hwmgr command.
The system hardware is organized into three parts, identified as subsystems by the hwmgr command. The subsystems are identified as component, SCSI, and name. The subsystems are related to the system hardware databases as follows:
The component subsystem references all hardware components specified in the (binary) /etc/dec_hwc_ldb and /etc/dec_hwc_cdb databases. This includes most components on a system.
The name subsystem references all hardware components in the binary /etc/dec_hw_db database, often referred to as the hardware topology. The database contains hardware persistence information, maintained by the kernel driver framework, and includes data for buses, controllers, and devices.
The SCSI subsystem references all SCSI devices in the binary /etc/dec_scsi_db database. The SCSI database contains entries for all devices managed by the SCSI/CAM architecture.
The specific features of hwmgr are as follows:
It provides a wide range of hardware management functions under a single command.
It enables you to manage (to a small extent) hardware components that are currently not connected to your system but were seen on a previous boot.
It enables you to manage hardware components that are connected to multiple systems in a cluster.
It enables you to propagate a management request to multiple members of a cluster.
5.4.2 Understanding hwmgr Command Options
The hwmgr command works with the kernel hardware management module, providing you with the ability to manage hardware components. Examples of a hardware component are storage peripherals, such as a disk or tape, or a system component such as a CPU or a bus. Use the hwmgr command to manage hardware components on either a single system or on a cluster.
Operational commands are characterized by a subsystem identifier after the command name. The subsystems are: component, scsi, and name. Some hwmgr operation commands are available for more than one subsystem. You should use the subsystem most closely associated with the type of operation you want to perform, depending on the parameter information that you obtained using the view and show command options.
Some commands require you to specify a subsystem name. However, if you specify the identity of a hardware component, then you do not need to specify a subsystem name. The hwmgr command is able to determine the correct subsystem on which to operate, based on the component identifier.
The command options are organized by task application. The list of command options, the subsystems on which they operate, and the nature of the operation is shown in the following table:
Option       Subsystem                   Operation
-----------  --------------------------  ----------------------------
add          name                        Database management
delete       component, name, and scsi   Database management
edit         name, scsi                  Database management
locate       component                   Hardware configuration
offline      component, name             Online Addition and Removal
online       component, name             Online Addition and Removal
power        component, name             Online Addition and Removal
redirect     scsi                        Hardware configuration
refresh      component, scsi             Database management
reload       name                        Driver configuration
remove       name                        Database management
scan         component, name, and scsi   Hardware configuration
status       component                   Hardware configuration
unconfigure  component, name             Hardware configuration
unindict     component                   Online Addition and Removal
unload       name                        Driver configuration
5.4.3 Configuring the hwmgr Environment
The hwmgr command provides environment settings that you can use to control the amount of information displayed. Use the following command to display the default environment settings:
# hwmgr view env
HWMGR_DATA_FILE = "/etc/hwmgr/hwmgr.dat"
HWMGR_DEBUG = FALSE
HWMGR_HEXINTS = FALSE
HWMGR_NOWRAP = FALSE
HWMGR_VERBOSE = FALSE
You can set the value of environment variables in your login script, or at the command line as shown in the following example:
# HWMGR_VERBOSE=TRUE
# export HWMGR_VERBOSE
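If you use the C shell, the equivalent is a single setenv command:
# setenv HWMGR_VERBOSE TRUE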
You usually need to define only the values of the HWMGR_HEXINTS, HWMGR_NOWRAP, and HWMGR_VERBOSE environment variables, as follows:
If the HWMGR_HEXINTS environment variable is defined as TRUE, any numerical data output from the hwmgr command is displayed in hexadecimal numbers.
If the HWMGR_NOWRAP environment variable is defined as TRUE, the output from the hwmgr command is truncated at 80 characters. In some cases it is difficult to read the output from hwmgr command options because it wraps. Setting the value of the HWMGR_NOWRAP environment variable to TRUE makes the output more legible at the console. A horizontal ellipsis marks truncated lines.
If the HWMGR_VERBOSE environment variable is defined as TRUE, the output from the hwmgr command contains more detailed information. The default setting of the hwmgr command is to hide any errors that are not critical. To view more verbose information, you can also append the -verbose switch to any of the hwmgr command options. For example, if you query an attribute that does not exist for all hardware components, by default the hwmgr command displays only the output from hardware components that support the attribute, as shown in the following example:
# /sbin/hwmgr get attr -a type
6: type = local
7: type = local
9: type = MOUSE
Not all hardware components on the system support the type attribute. If the HWMGR_VERBOSE environment variable is not defined as TRUE, the errors generated by the preceding command are suppressed. To see the errors, use the -verbose switch with the command line as follows:
# hwmgr get attr -a type -verbose
1: Attribute "type" not defined.
2: Attribute "type" not defined.
4: Attribute "type" not defined.
5: Attribute "type" not defined.
6: current type = local
7: current type = local
8: Attribute "type" not defined.
9: current type = MOUSE
10: Attribute "type" not defined.
11: Attribute "type" not defined.
.
.
(long display, output truncated)
You can use the -verbose switch with all hwmgr commands, although it does not always produce additional output.
5.4.4 Using hwmgr to Manage Hardware
The following sections contain examples of tasks that you might need to perform by using the hwmgr command. Some of these examples might not be useful for managing a small server with a few peripheral devices. However, when managing a large installation with many networked systems or clusters with hundreds of devices, they become very useful. Using the hwmgr command enables you to connect to an unfamiliar system, obtain information about its component hierarchy, and then perform administrative tasks without any previous knowledge about how the system is configured and without consulting system logs or files to find devices.
5.4.4.1 Locating SCSI Hardware
The locate option, which currently works only for some SCSI devices, enables you to identify a device. You might use this command when you are trying to physically locate a SCSI disk. The following command flashes the light on a SCSI disk for one minute:
# /sbin/hwmgr locate -id 42 -time 60
You can then check the disk bays for the component that is flashing its light. You cannot use this option to locate CD-ROM readers and disks that are part of an array (such as an HSZ80).
5.4.4.2 Viewing the System Hierarchy
Use the view command to view the hierarchy of hardware within a system. This command enables you to find what adapters are controlling devices, and discover where adapters are installed on buses. The following example shows typical output on a small system that is not part of a cluster:
# hwmgr view hier
HWID: hardware hierarchy
----------------------------------------------------
1:  platform AlphaServer 800 5/500
2:  cpu CPU0
6:  bus pci0
7:  connection pci0slot5
15: scsi_adapter isp0
16: scsi_bus scsi0
32: disk bus-0-targ-0-lun-0 dsk0
33: disk bus-0-targ-4-lun-0 cdrom0
34: disk bus-0-targ-5-lun-0 dsk1
35: disk bus-0-targ-6-lun-0 dsk2
36: disk bus-0-targ-8-lun-0 dsk3
9:  connection pci0slot6
17: graphics_controller s3trio0
output truncated
Some components might appear as multiple entries in the hierarchy. For example, if a disk is on a SCSI bus that is shared between two adapters, the hierarchy shows two entries for the same device. You can obtain similar views of the system hardware hierarchy by using the SysMan Station GUI.
5.4.4.3 Viewing System Categories
To perform hardware management options on all components of the same category, or to select a particular component in a category, you might need to know what categories of components are available. The hardware manager get category command fetches all the possible values for hardware categories. This command is useful when you use it in conjunction with the get attributes and set attributes options, which enable you to display and configure the attributes (or properties) of a particular device. When you know the hardware categories, you can limit your attribute queries to a specific type of hardware, as follows:
# /sbin/hwmgr get category
Hardware Categories
-------------------
category = undefined
category = platform
category = cpu
category = pseudo
category = bus
category = connection
category = serial_port
category = keyboard
category = pointer
category = scsi_adapter
category = scsi_bus
category = network
category = graphics_controller
category = disk
category = tape
Knowing the categories, you can focus your attribute query by specifying a category as follows:
# hwmgr get attr -category platform
1:
  name = AlphaServer 800 5/500
  category = platform
This output informs you that the system platform has a hardware ID of 1, and that the platform name is AlphaServer 800 5/500. See also the get attribute and set attribute command options.
5.4.4.4 Obtaining Component Attributes
Attributes are characteristics of the component that might be read-only information, such as the model number of the device, or you might be able to set a value to control some aspect of the behavior of the device, such as the speed at which it operates. The get attribute command option fetches and displays attributes for a component. The hardware manager command is specific to managing hardware and fetches only attributes from the hardware set. All hardware components are identified by a unique hardware identifier, otherwise known as the hardware ID or HWID.
The following command fetches all attributes for all hardware components on the local system and directs the output to a file that you can search for information:
# hwmgr get attr > sysattr.txt
However, if you know which component category you want to query, as described in Section 5.4.4.3, you can focus your query on that particular category.
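You can then search the saved file for components or attributes of interest; for example, using the file name chosen above:
# grep -n settable sysattr.txt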
Querying a hardware component category for its attributes can provide useful information. For example, you might not be sure if the network is working for some reason. You might not even know what type of network adapters are installed in a system or how they are configured. Use the get attribute option to determine the status of network adapters as shown in the following example:
# hwmgr get attr -category network
203:
  name = ln0
  category = network
  sub_category = Ethernet
  model = DE422
  hardware_rev =
  firmware_rev =
  MAC_address = 08-00-2B-3E-08-09
  MTU_size = 1500
  media_speed = 10
  media_selection = Selected by Jumpers/Switches
  media_type =
  loopback_mode = 0
  promiscuous_mode = 0
  full_duplex = 0
  multicast_address_list = CF-00-00-00-00-00 \
                           01-00-5E-00-00-01
  interface_number = 1
This output provides you with the following information:
The fields and values listed below the HWID are the attribute names and their current values. Some values might be blank if they are not initialized by the driver. Using this information, you know that the system has a model DE422 Ethernet adapter that has a component name of ln0.
You can then check the status of this network adapter by using the ifconfig command, as follows:
# ifconfig ln0
ln0: flags=c62
     inet XX.XXX.XXX.XX netmask ffffff00 broadcast XX.XXX.XX.XXX ipmtu 1500
In some cases, you can change the value of a component attribute to modify component information or change its behavior on the system. Setting attributes is described in Section 5.4.4.5. To find which attributes are settable, you can use the get option to fetch all attributes and use the grep command to search for the (settable) keyword as follows:
# hwmgr get attr | grep settable
device_starvation_time = 25 (settable)
device_starvation_time = 0 (settable)
device_starvation_time = 25 (settable)
device_starvation_time = 25 (settable)
device_starvation_time = 25 (settable)
device_starvation_time = 25 (settable)
The output shows that there is one settable attribute on the system, device_starvation_time. Having found this, you can now obtain a list of components that support this attribute as follows:
# hwmgr get attr -a device_starvation_time
23: device_starvation_time = 25 (settable)
24: device_starvation_time = 0 (settable)
25: device_starvation_time = 25 (settable)
31: device_starvation_time = 25 (settable)
34: device_starvation_time = 25 (settable)
35: device_starvation_time = 25 (settable)
The output from this command displays the HWID of the components which support the device_starvation_time attribute. Reading the HWID in the hierarchy output, you can determine that this attribute is supported by SCSI disks. See also the set attribute and get category options.
5.4.4.5 Setting Component Attributes
The set attribute command option allows you to set (or configure) the value of settable attributes. You cannot set all component attributes. When you use the get attribute command option, the output flags any configurable attributes by labeling them as (settable) next to the attribute value. A method of finding such attributes is described in Section 5.4.4.4.
As demonstrated in Section 5.4.4.4, the value of device_starvation_time is an example of a settable attribute supported by SCSI disks. This attribute controls the amount of time that must elapse before the disk driver determines that a component is unreachable due to SCSI bus starvation (no data transmitted). If the device_starvation_time expires before the driver is able to determine that the component is still there, the driver posts an error event to the binary error log. Using the following commands, you can change the value of the device_starvation_time attribute for the component with the HWID of 24, and then verify the new value:
# hwmgr set attr -id 24 -a device_starvation_time=60
# hwmgr get attr -id 24 -a device_starvation_time
24: device_starvation_time = 60 (settable)
This action does not change the saved value for this attribute. All attributes have three possible values: a current value, a saved value, and a default value.
The default value is a constant and you cannot modify it. If you never set a value of an attribute, the default value applies.
When you set the saved value, it persists across boots. You can think of it as a permanent override of the default.
When you set the current value, it does not persist across reboots. You can think of it as a temporary value for the attribute.
When a system is rebooted, the value of the attribute reverts to the saved value (if there is a saved value). If there is no saved value, the attribute value reverts to the default value. Setting an attribute value always changes the current value of the attribute. The following examples show how you get and set the saved value of an attribute:
# hwmgr get attr saved -id 24 -a device_starvation_time
24: saved device_starvation_time = 0 (settable)
# hwmgr set attr saved -id 24 -a device_starvation_time=60
saved device_starvation_time = 60 (settable)
# hwmgr get attr saved -id 24 -a device_starvation_time
24: saved device_starvation_time = 60 (settable)
See also the get attribute and get category command options.
5.4.4.6 Viewing the Cluster
If you are working on a cluster, you often need to focus hardware management commands at a particular host on the cluster. The view cluster command option enables you to obtain details of the hosts in a cluster. The following sample output shows a typical cluster:
# /sbin/hwmgr view cluster
Member ID State Member HostName
--------- ----- ---------------
1         UP    ernie.zok.paq.com (localhost)
2         UP    bert.zok.paq.com
3         DOWN  bigbird.zok.paq.com
You can also use this option to verify that the hwmgr command is aware of all cluster members and their current status. The preceding example indicates a three-member cluster with one member (bigbird) currently down. The (localhost) marker tells us that hwmgr is currently running on cluster member ernie. Any hwmgr commands that you enter by using the -cluster option are sent to members bert and ernie, but not to bigbird as that system is unavailable. Additionally, any hwmgr commands that you issue with the -member bigbird option fail because the cluster member state for that host is DOWN.
The view cluster command option works only if the system is a member of a cluster. If you attempt to run it on a single system, an error message is displayed. See also the clu_get_info command, and refer to the TruCluster documentation for more information on clustered systems.
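On a cluster member, the clu_get_info command prints similar membership information and takes no arguments in its simplest form:
# clu_get_info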
5.4.4.7 Viewing Devices
You can use the hwmgr command to display all components that have a device special file name, such as /dev/disk/dsk34, by using the view devices option. The hardware manager considers any hardware component that has the attribute dev_base_name to be an accessible device. (See Section 5.4.4.4 for information on obtaining the attributes of a device.)
The view devices option enables you to determine what components are currently registered with hardware management on a system, and provides information that enables you to access these components through their device special file. For example, if you load a CD-ROM into a reader, use this output to determine whether you mount the CD-ROM reader as /dev/disk/cdrom0.
The view devices option is also useful to find the HWIDs for any registered devices. When you know the HWID for a device, you can use other hwmgr command options to query attributes on the device, or perform other operations on the device. Typical output from this command is shown in the following example:
# hwmgr view dev
HWID: DSF Name           Mfg    Model     Location
----------------------------------------------------------------------
3:  /dev/kevm
22: /dev/disk/dsk0c      DEC    RZ26      bus-0-targ-3-LUN-0
23: /dev/disk/cdrom0c    DEC    RRD42     bus-0-targ-4-LUN-0
24: /dev/disk/dsk1c      DEC    RZ26L     bus-1-targ-2-LUN-0
25: /dev/disk/dsk2c      DEC    RZ26L     bus-1-targ-4-LUN-0
29: /dev/ntape/tape0     DEC    TLZ06     bus-1-targ-6-LUN-0
35: /dev/disk/dsk8c      COMPAQ RZ1CF-CF  bus-2-targ-12-LUN-0
The output shows all hardware components that have the dev_base_name attribute on the local system. The hardware manager attempts to resolve the dev_base_name to the full path location to the device special file, such as /dev/ntape/tape0. It always uses the path to the device special file with the c partition. The c partition represents the entire capacity of the device, except in the case of tapes. See Section 5.5 for information on device special file names and functions.
If you are working on a cluster, you can view all components registered with hardware management across the entire cluster with the -cluster option, as follows:
# hwmgr view devices -cluster
HWID: DSF Name            Model  Location            Hostname
------------------------------------------------------------------
20: /dev/disk/floppy0c    3.5in  fdi0-unit-0         tril7e
34: /dev/disk/cdrom0c     RRD46  bus-0-targ-5-LUN-0  tril7e
35: /dev/disk/dsk0c       HSG80  bus-4-targ-1-LUN-1  tril7d
35: /dev/disk/dsk0c       HSG80  bus-6-targ-1-LUN-1  tril7e
36: /dev/disk/dsk1c       RZ26N  bus-1-targ-0-LUN-0  tril7e
37: /dev/disk/dsk2c       RZ26N  bus-1-targ-1-LUN-0  tril7e
38: /dev/disk/dsk3c       RZ26N  bus-1-targ-2-LUN-0  tril7e
39: /dev/disk/dsk4c       RZ26N  bus-1-targ-3-LUN-0  tril7e
40: /dev/disk/dsk5c       RZ26N  bus-1-targ-4-LUN-0  tril7e
41: /dev/disk/dsk6c       RZ26N  bus-1-targ-5-LUN-0  tril7e
42: /dev/disk/dsk7c       RZ26N  bus-1-targ-6-LUN-0  tril7e
43: /dev/disk/dsk8c       HSZ40  bus-3-targ-2-LUN-0  tril7d
43: /dev/disk/dsk8c       HSZ40  bus-3-targ-2-LUN-0  tril7e
44: /dev/disk/dsk9c       HSZ40  bus-3-targ-2-LUN-1  tril7d
44: /dev/disk/dsk9c       HSZ40  bus-3-targ-2-LUN-1  tril7e
45: /dev/disk/dsk10c      HSZ40  bus-3-targ-2-LUN-2  tril7d
45: /dev/disk/dsk10c      HSZ40  bus-3-targ-2-LUN-2  tril7e
Some devices, such as the disk with the HWID of 45, appear more than once in this display. These are components that are on a shared bus between two cluster members. The hardware manager displays the component entry as seen from each cluster member. See also the following hwmgr command options: show scsi, show components, and get attributes.
5.4.4.8 Viewing Transactions
Hardware management operations are transactions that must be synchronized
across a cluster.
The
view transaction
command option displays
the state of any hardware management transactions that have occurred since
you booted the system.
Use this option to check for failed hardware management
transactions.
If you do not specify the
-cluster
or
-member
option, the command displays status on transactions that are processed
or initiated by the local host (the system on which the command is entered).
The
view transaction
command option is primarily for debugging
problems with hardware management in a cluster, and you are likely to use
this command infrequently.
The command has the following typical output:
# /sbin/hwmgr view transactions

hardware management transaction status
-----------------------------------------------------
there is no active transaction on this system

the last transaction initiated from this system was:
    transaction = modify cluster database
    proposal    = 3834
    sequence    = 0
    status      = 0

the last transaction processed by this system was:
    transaction = modify cluster database
    proposal    = 3834
    sequence    = 0
    status      = 0

proposal                      last status   success   fail
----------------------------  -----------   -------   ----
Modify CDB/ 3838              0             3         0
Read CDB/ 3834                0             3         0
No operation/ 3835            0             1         0
Change name/ 3836             0             0         0
Change name/ 3837             0             0         0
Locate HW/ 3832               0             0         0
Scan HW/ 3801                 0             0         0
Unconfig HW - confirm/ 3933   0             0         0
Unconfig HW - commit/ 3934    0             0         0
Delete HW - confirm/ 3925     0             0         0
Delete HW - commit/ 3926      0             0         0
Redirect HW - confirm/ 3928   0             0         0
Redirect HW - commit1/ 3929   0             0         0
Redirect HW - commit2/ 3930   0             0         0
Refresh - lock/ 3937          0             0         0
From this
output you can tell that the last transaction that occurred describes a modification
of the cluster database.
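To display the corresponding status for all cluster members rather than only the local host, add the -cluster option described earlier; the command form is as follows:
# /sbin/hwmgr view transactions -cluster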
5.4.4.9 Deleting a SCSI Device
Under some circumstances, you might want to remove a SCSI device from
a system, such as when it is logging errors and you must replace it.
Use the
delete scsi
command option to remove a SCSI component from all hardware
management databases cluster-wide.
This option unregisters the component from
the kernel, removes all persistent database entries for the device, and removes
all device special files.
When you delete a SCSI component it is no longer
accessible and its device special files are removed from the appropriate
/dev
subdirectory.
You cannot delete a SCSI component that is currently
open.
You must terminate all I/O connections to the device (such as mounts).
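For example, if a file system on the disk is mounted, unmount it first (the mount point shown is illustrative):
# umount /mnt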
You might need to delete a SCSI component if you are removing it from
your system and you do not want information about the component remaining
on the system.
You might also want to delete a SCSI component because of operating system problems rather than hardware problems; for example, the component operates correctly but you cannot access it through the device special file for some reason.
In this case you can delete the component and use the
scan scsi
command option to find and register it.
To replace the SCSI component (or bring the old component back) you
can use the
scan scsi
command option to find the component
again.
However, when you delete a component and then perform a
scan
operation to bring the component back on line, it does not
always have the same device special file.
To replace a component as an exact
replica of the original, you must perform the additional operations described
in
Section 5.4.4.11.
There is also no guarantee that a
scan
operation can find the component if it is not actively responding
during the bus scan.
This option accepts the SCSI device identifier
(-did),
which is not equivalent to the HWID.
The following examples show how you check
the SCSI database and then delete a SCSI device:
# hwmgr show scsi
        SCSI               DEVICE  DEVICE  DRIVER NUM   DEVICE FIRST
HWID:   DEVICEID HOSTNAME  TYPE    SUBTYPE OWNER  PATH  FILE   VALID PATH
-------------------------------------------------------------------------
   23:  0        bert      disk    none    2      1     dsk0   [0/3/0]
   24:  1        bert      cdrom   none    0      1     cdrom0 [0/4/0]
   25:  2        bert      disk    none    0      1     dsk1   [1/2/0]
   30:  4        bert      tape    none    0      1     tape2  [1/6/0]
   31:  3        bert      disk    none    0      1     dsk4   [1/4/0]
   34:  5        bert      disk    none    0      1     dsk7   [2/5/0]
   35:  6        bert      disk    none    0      1     dsk8
In this example,
component ID 23 is currently open by a driver.
You can see this because the
DRIVER OWNER
field is not zero. Any number other than zero in the
DRIVER OWNER
field means that a driver has opened the component
for use.
Therefore, you cannot delete SCSI component 23 because it is currently
in use.
However, component ID 35 is not open by a driver, and it currently has
no valid paths shown in the
FIRST VALID PATH
field.
This
means that the component is not currently accessible and you can delete it
safely.
When you delete the device, you also delete the
/dev/disk/dsk8*
and
/dev/rdisk/dsk8*
device special files.
To delete the SCSI device, specify the SCSI DEVICEID value with the
delete
option, and then review the SCSI database as follows:
# hwmgr del scsi -did 6
hwmgr: The delete operation was successful.
# hwmgr show scsi

        SCSI               DEVICE  DEVICE  DRIVER NUM   DEVICE FIRST
HWID:   DEVICEID HOSTNAME  TYPE    SUBTYPE OWNER  PATH  FILE   VALID PATH
-------------------------------------------------------------------------
   23:  0        bert      disk    none    2      1     dsk0   [0/3/0]
   24:  1        bert      cdrom   none    0      1     cdrom0 [0/4/0]
   25:  2        bert      disk    none    0      1     dsk1   [1/2/0]
   30:  4        bert      tape    none    0      1     tape2  [1/6/0]
   31:  3        bert      disk    none    0      1     dsk4   [1/4/0]
   34:  5        bert      disk    none    0      1     dsk7   [2/5/0]
The
component
/dev/disk/dsk8
is successfully deleted.
5.4.4.10 Creating a User-Defined SCSI Device Name
Most components have an identification attribute that is unique to the device.
You can read it as the
serial_number
or name
attribute of a SCSI device.
For example, the following
hwmgr
command returns both these attributes for the component with a HWID of 30,
a SCSI disk:
# hwmgr get attributes -id 30 -a serial_number -a name
30:
  serial_number = SCSI-WWID:0c000008:0060-9487-2a12-4ed2
  name = SCSI-WWID:0c000008:0060-9487-2a12-4ed2
This string is known as a worldwide identifier (WWID) because it is unique for each component on the system.
Some components do not provide a unique identifier.
The operating system
creates such a number for the component by using valid path
bus/target/LUN
data that describes the physical location of the device.
Because
systems can share devices, each system that has access to the component sees
a different path and creates its own unique WWID for that device.
There is
a possibility of concurrent I/O access to such shared devices, possibly resulting
in data corruption.
To check for such devices, use the following command:
# hwmgr show comp -cshared
HWID:  HOSTNAME  FLAGS  SERVICE  COMPONENT NAME
-----------------------------------------------
  40:  joey      -cd--  iomap    SCSI-WWID:04100026:"DEC     RZ28M    (C) DEC00S846590H7CCX"
  41:  joey      -cd--  iomap    SCSI-WWID:04100026:"DEC     RZ28L-AS (C) DECJEE019480P2VSN"
  42:  joey      -cd--  iomap    SCSI-WWID:0410003a:"DEC     RZ28     (C) DECPCB=ZG34142470 ; HDA=000034579643"
  44:  joey      rcd--  iomap    SCSI-WWID:04100026:"DEC     RZ28M    (C) DEC00S735340H6VSR"
  .
  .
  .
Some devices, such as the TL895 model media changer, do not support INQUIRY pages 0x80 or 0x83 and are unable to provide the system with a unique WWID. To support features such as path failover or installation into a cluster on a shared bus, you must manually add such devices to the system. This method is recommended only for media changers on a shared bus; it is not recommended for other types of devices such as disks, CD-ROM readers, tape drives, or RAID controllers. Other devices provide a unique string, such as a serial number, from which the system can create a unique WWID. You can use such a component on a shared bus because its WWID is always the same and the operating system always recognizes it as the same device.
You can use the
hwmgr
command to create a user-defined
unique name that in turn enables you to create a WWID known to all systems
that are sharing the device.
Because the component has a common WWID it has
one set of device special file names, preventing the risk of concurrent I/O.
The process for creating a user-defined name is as follows:
Choose the name that you want to assign. This name should be unique within the scope of all systems that have access to the device. Although it need not be as long and complex as the WWIDs shown in the preceding example, it should be sufficiently long to provide the information that you need to recognize the renamed component and differentiate it from others.
Decide which component uses this name. When renamed, the component is seen as the same component on all systems, so you must update every system that has access to the component.
Each system that shares the component creates a new WWID by
using the string and uses this new WWID for all subsequent registrations with
the system.
Internally, the component is still tracked by its default WWID
(if one existed).
However, all external representations display the new WWID
based on the user-defined name.
On a cluster you must run the
edit
scsi
command option on every cluster member that has access to the
device.
Caution
You must update all systems that have access to the device.
The following example shows how you assign a user-defined name.
Although
the
edit scsi
command option is recommended only for devices
that do not have a unique WWID, the example uses disks for the sake of simplicity.
# hwmgr show scsi

        SCSI               DEVICE  DEVICE  DRIVER NUM   DEVICE FIRST
HWID:   DEVICEID HOSTNAME  TYPE    SUBTYPE OWNER  PATH  FILE   VALID PATH
-------------------------------------------------------------------------
   22:  0        ftwod     disk    none    0      1     dsk0   [0/3/0]
   23:  1        ftwod     cdrom   none    0      1     cdrom0 [0/4/0]
   24:  2        ftwod     disk    none    0      1     dsk1   [1/2/0]
   25:  3        ftwod     disk    none    2      1     dsk2   [2/4/0]
This command displays which SCSI devices are on the system. On this system the administrator knows that there is a shared bus and that hardware components 24 and 25 are actually the same device. The WWID for this component is constructed by using the bus/target/LUN address information. Because the bus/target/LUN addresses are different, the component is seen as two separate devices. This can cause data corruption problems because the operating system might use two different sets of device special files to access the disk (/dev/disk/dsk1
and
/dev/disk/dsk2
).
The following command shows how you can rename the device, and how it appears after it is renamed:
# hwmgr edit scsi -did 2 -uwwid "this is a test"
hwmgr: Operation completed successfully.
# hwmgr show scsi -did 2 -full
        SCSI               DEVICE  DEVICE  DRIVER NUM   DEVICE FIRST
HWID:   DEVICEID HOSTNAME  TYPE    SUBTYPE OWNER  PATH  FILE   VALID PATH
-------------------------------------------------------------------------
   24:  2        ftwod     disk    none    0      1     dsk1   [1/2/0]

     WWID:0910003c:"DEC     (C) DECZG41400123ZG41800340:d01t00002l00000"
     WWID:ff10000e:"this is a test"

     BUS   TARGET   LUN   PATH STATE
     ------------------------------
     1     2        0     valid
You repeat the operation on the other component path and the same name is given to the component at address 2/4/0. After you do this, hardware management uses your user-defined name to track the component and to recognize the alternate paths to the same device:
# hwmgr edit scsi -did 3 -uwwid "this is a test"
hwmgr: Operation completed successfully.
# hwmgr show scsi -did 3 -full
        SCSI               DEVICE  DEVICE  DRIVER NUM   DEVICE FIRST
HWID:   DEVICEID HOSTNAME  TYPE    SUBTYPE OWNER  PATH  FILE   VALID PATH
-------------------------------------------------------------------------
   25:  3        ftwod     disk    none    0      1     dsk1   [2/4/0]

     WWID:0910003c:"DEC     (C) DECZG41400123ZG41800340:d02t00004l00000"
     WWID:ff10000e:"this is a test"

     BUS   TARGET   LUN   PATH STATE
     ------------------------------
     2     4        0     valid
Both of these devices now use device
special file name
/dev/disk/dsk1
and there is no longer
a danger of data corruption as a result of two sets of device special files
accessing the same disk.
5.4.4.11 Replacing a Failed SCSI Device
When a SCSI disk fails, you might want to replace it in such a way that
the replacement disk takes on hardware characteristics of the failed device,
such as ownership of the same device special files.
The
redirect
command option enables you to assign such characteristics.
For
example, if you have an HSZ (RAID) cabinet and a disk fails, you can hot-swap
the failed disk and then use the
redirect
command option
to bring the new disk on line as a replacement for the failed disk.
Do not use this procedure alone if a failed disk is managed by an application such as AdvFS or LSM. Before you can swap managed disks, you must put the disk management application into an appropriate state or remove the disk from the management application. See the appropriate documentation, such as the Logical Storage Manager and AdvFS Administration guides.
Note
The replacement disk must be of the same type for the
redirect
operation to work.
The following example shows how you use the
redirect
option:
# /sbin/hwmgr show scsi

        SCSI               DEVICE  DEVICE  DRIVER NUM   DEVICE FIRST
HWID:   DEVICEID HOSTNAME  TYPE    SUBTYPE OWNER  PATH  FILE   VALID PATH
-------------------------------------------------------------------------
   23:  0        fwod      disk    none    2      1     dsk0   [0/3/0]
   24:  1        fwod      cdrom   none    0      1     cdrom0 [0/4/0]
   25:  2        fwod      disk    none    0      1     dsk1   [1/2/0]
   30:  4        fwod      tape    none    0      1     tape2  [1/6/0]
   31:  3        fwod      disk    none    0      1     dsk4
   37:  5        fwod      disk    none    0      1     dsk10  [2/5/0]
This output shows a failed SCSI disk with HWID 31.
The component has no
valid paths.
To replace this failed disk with a new disk that has device special
file name
/dev/disk/dsk4
, and the same
dev_t
information, use the following procedure:
Install the component as described in the hardware manual.
Use the following command to find the new device:
# /sbin/hwmgr scan scsi
This
command probes the SCSI subsystem for new devices and registers those devices.
You can then repeat the
show scsi
command and obtain the
SCSI device id of the replacement device.
Use the following command to reassign the component characteristics
from the failed disk to the replacement disk.
This example assumes that the
SCSI device identifier (did) assigned to the new disk is 36:
# /sbin/hwmgr redirect scsi -src 3 -dest 36
5.4.4.12 Using hwmgr to Replace a Cluster Member's Boot Disk
On a single system, the
hwmgr
command provides a
redirect
option which you use as part of the procedure to replace
a failing disk.
When you replace the failed disk, you use the
redirect
option to direct I/O from the failed component to the replacement
device.
This option redirects device special file names, cluster dev_t values,
local dev_t values, logical ID, and HWID.
Only unique device identifiers (did) are accepted by the
redirect
option.
In a cluster, device identifiers are not guaranteed
to be unique and the command might fail as shown in the following example:
# hwmgr redirect scsi -src source_did -dest target_did
"Error (95) Cannot start operation."
For
the redirect operation to succeed, both or neither of the hardware identifiers
must exist on each member of the cluster.
Use the following procedure to ensure
that the
redirect
operation works:
Verify whether the source and destination component exist. Use the following command on each member of the cluster:
# hwmgr show scsi -did device_identifier

        SCSI               DEVICE  DEVICE  DRIVER NUM   DEVICE FIRST
HWID:   DEVICEID HOSTNAME  TYPE    SUBTYPE OWNER  PATH  FILE   VALID PATH
-------------------------------------------------------------------------
   32:  did      rymoc     disk    none    2      1     dsk1   [0/1/0]
Follow this step only if the source component exists on other cluster members but the destination component does not.
Configure the destination component on those cluster members as follows:
# hwmgr scan scsi
Note
The bus scan is an asynchronous operation. The system prompt returns immediately but that does not mean that the scan is complete. On systems with many devices, the scan can take several minutes to complete.
Follow this step only if the destination component exists on other members of the system but the source component does not.
Delete the destination component from those cluster members as follows:
# hwmgr delete scsi -did device_identifier
You can now use the
redirect
option to
direct I/O to the replacement drive.
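That is, you repeat the command form shown earlier with the appropriate placeholder identifiers:
# hwmgr redirect scsi -src source_did -dest target_did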
5.4.4.13 Viewing the Persistence Database for the name Subsystem
The name persistence database stores information
about the hardware topology of the system.
This data is maintained by the
kernel and includes data for controllers and buses in addition to devices.
Use the
show name
command option to display persistence
data that you can manipulate by using other
hwmgr
commands.
The following example shows typical output from the
show name
command option on a small system:
# hwmgr show name -member ychain
HWID:  NAME   HOSTNAME  PERSIST TYPE  PERSIST AT
-----------------------------------------------------
  13:  isp0   ychain    BUS           pci0 slot 5
   4:  pci0   ychain    BUS           nexus
  14:  scsi0  ychain    CONTROLLER    isp0 slot 0
  29:  tu0    ychain    CONTROLLER    pci0 slot 11
The following information is provided by the output:
HWID:
- The unique hardware identifier
for this device.
You can also determine this by using the
view hierarchy
command option.
NAME
- The component name and the
instance number, such as
pci0
, for peripheral component interconnect (PCI) bus 0.
Each additional PCI bus has a different instance number.
HOSTNAME
- The host on which the
command is run.
When working in a cluster you can specify the cluster name
on which the command is to operate.
PERSIST TYPE
- The type of hardware
component, which is a bus, controller, or device.
PERSIST AT
- The logical address
of the device, which might map to a physical location in the hardware.
For
example, the SCSI controller
scsi0
persists at
slot 0
of the bus
isp0.
5.4.4.14 Deleting and Removing a Device from the name Persistence Database
One of the options for manipulating the name subsystem is to remove
components from the persistence database.
The
hwmgr
command
offers two methods of removal:
remove
- Use this option to take
an entry out of the persistence database.
This does not affect the running
system but at the next reboot the component is no longer seen.
delete
- Use this option to take
an entry out of the persistence database and delete it from the running system.
This command unregisters and unconfigures the device, removing it from all
hardware management databases.
The following example shows typical output from the
show name
command option on a small system.
You specify the variable
name, which is the component name shown in the output from the
show name
command option described in
Section 5.4.4.13:
# hwmgr show name
HWID:  NAME   HOSTNAME  PERSIST TYPE  PERSIST AT
------------------------------------------------------
  33:  aha0   fegin     BUS           eisa0 slot 7
  31:  ln0    fegin     CONTROLLER    eisa0 slot 5
   8:  pci0   fegin     BUS           ibus0 slot 0
  34:  scsi1  fegin     CONTROLLER    aha0 slot 0
  17:  scsi0  fegin     CONTROLLER    psiop0 slot 0
  15:  tu0    fegin     CONTROLLER    pci0 slot 0
There are two
SCSI adapters shown in the preceding output.
If
scsi0
is
the target of a
remove
operation then
scsi1
does not become
scsi0
.
The location of the adapter persists
at
aha0 slot 0
and the name
scsi1
is
saved across boots.
To remove
scsi0
and rename
scsi1
you use the following commands:
# hwmgr remove name -entry scsi0
# hwmgr edit name -entry scsi1 -parent_num 0
5.5 Device Naming and Device Special Files
Devices are made available to the rest of the system through device
special files located in the
/dev
directory.
A device special
file enables an application (such as a database application) to access a device
through its device driver, which is a kernel module that controls one or more
hardware components of a particular type, for example, network controllers, graphics controllers, and disks (including CD-ROM devices).
See
Section 5.4
for a discussion of system components.
The system uses
device special files to access pseudodevice drivers that do not control a
hardware component, for example, a pseudoterminal (pty) terminal driver, which simulates a terminal device.
The
pty
terminal driver is a character driver typically employed by remote logins;
it is described in
Section 5.6.
(For detailed information
on device drivers refer to the device driver documentation.)
Normally, device
special file management is performed automatically by the system.
For example,
when you install a new version of the UNIX operating system, there is a point
at which the system probes all buses and controllers and all the system devices
are found.
The system then builds databases that describe the devices and
creates device special files that make devices available to users.
The most
common way that you use a device special file is to specify it as the location
of a UFS file system in the system
/etc/fstab
file, which
is documented in
Chapter 6.
You need to perform manual operations on device special files only when there are problems with the system or when you need to support a device that the system cannot handle automatically. The following sections describe the way that devices and device special files are named and organized in Version 5.0 or higher. See Appendix B for information on other supported device mnemonics for legacy devices and their associated device names.
The following considerations apply in this release:
The name of a device special file for a SCSI device has the
format
/dev/disk/dsk13a
for SCSI disks and
/dev/ntape/tape0_d0
for SCSI tapes.
The name of a SCSI device special
file in the format
/dev/rz10b
is a legacy device special
file.
The following sections differentiate between current and legacy device
special files.
You might also see these referred to as old (legacy) and new
(current) device names in some scripts and commands.
First time users of the
operating system need not be concerned with legacy device special file names
except where there is a need to use third-party drivers and devices that do
not support the current naming model.
(The structure of a device special file
is described in detail later in this section.)
There is currently one device special file naming model for SCSI disk and tape devices and a different model for all other devices. The naming system for SCSI disk and tape devices will be extended to the other devices in future releases. This ensures that there is continued support for legacy devices and device names on nonclustered systems. Applications and commands either support all device names or display an error message informing you of the supported device name formats.
Legacy device names and device special files will be maintained
for some time and their retirement schedule will be announced in a future
release.
5.5.1 Related Documentation and Commands
The following documents contain information about device names:
Books:
Chapter 6 contains information about context-dependent symbolic links (CDSLs). Some directories that contain device special files are CDSLs and you should be familiar with this concept before you proceed.
Reference pages and commands:
See
dsfmgr
(8)
for information on managing device special files. The dsfmgr command replaces the MAKEDEV command.
(See
MAKEDEV
(8).)
See
disklabel
(8)
for information on maintaining disk pack labels.
See
diskconfig
(8)
for instructions on invoking the Disk Configuration GUI, a disk management
tool that provides additional features over
disklabel
.
You can use it to partition disks and create file systems on the disks in
a single operation.
You can also launch the Disk Configuration interface from
the CDE Application Manager - System_Admin folder.
The Disk Configuration
icon is located in the Configuration folder.
Online help describes how to
use this interface.
5.5.2 Device Special File Directories
To contain the device special files, a
/devices
directory
exists under the root directory (/).
This directory contains
subdirectories that each contain device special files for a class of devices.
A class of device corresponds to related types of devices, such as disks or
nonrewind tapes.
For example, the
/dev/disk
directory contains
files for all supported disks, and the
/dev/ntape
directory
contains device special files for nonrewind tape devices.
In this release,
only the subdirectories for certain classes are created.
The available classes
are defined in
Appendix B.
For all operations you must
specify paths by using the
/dev
directory and not the
/devices
directory.
Note
Some device special file directories are CDSLs, which enable devices to be available cluster-wide when a system is part of a cluster. You should be familiar with the file system hierarchy described in Chapter 6, in particular the implementation of CDSLs.
From the
/dev
directory, there are symbolic links
to corresponding subdirectories to the
/devices
directory.
For example:
lrwxrwxrwx   1 root   system    25 Nov 11 13:02 ntape -> ../../../../devices/ntape
lrwxrwxrwx   1 root   system    25 Nov 11 13:02 rdisk -> ../../../../devices/rdisk
lrwxrwxrwx   1 root   system    24 Nov 11 13:02 tape -> ../../../../devices/tape
This structure enables certain devices to be host-specific when the
system is a member of a cluster.
It enables other devices to be shared between
all members of a cluster.
In addition, new classes of devices might be added
by device driver developers and component vendors.
5.5.2.1 Legacy Device Special File Names
According to legacy device naming conventions, all device special files
are stored in the
/dev
directory.
The device special file
names indicate the device type, its physical location, and other device attributes.
Examples of the file name format for disk and tape device special file names
that use the legacy conventions are
/dev/rz14f
for a SCSI
disk and
/dev/rmt0a
for a tape device.
The name contains
the following information:
/path/prefix{root_name}{unit_number}{suffix}

/dev/        rmt    0     a
/dev/   r    rz     4     c
/dev/   n    rmt    12    h
This information is interpreted as follows:
The
path
is the directory for device special
files.
All device special files are placed in the
/dev
directory.
The prefix differentiates one set of device special files for the same physical device from another set, as follows:
r
- Indicates a character (raw) disk
device.
Device special files for block devices have no prefix.
n
- Indicates a no rewind on close
tape device.
Device special files for rewind on close tape devices have no
prefix.
The
root_name
is the two or three-character
driver name, such as
rz
for SCSI disk devices, or
rmt
for tape devices.
The unit_number is the unit number of the device, as follows:
For SCSI disks, the unit number is calculated with the formula:
unit = (bus * 8) + target
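For example, a SCSI disk on bus 1 at target 2 has unit number (1 * 8) + 2 = 10, which yields device special file names such as /dev/rz10b.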
For HSZ40 and HSZ10 disk devices, a letter can precede
the unit number to indicate the LUN, where
a
is LUN 0,
b
is LUN 1, and so on.
You do not need to include the letter
a
for LUN 0; it is the default.
For tapes, the unit number is a sequential number from 0 to 7.
The suffix differentiates multiple device special files for the same physical device, as follows:
Disks use the letters
a
through
h
to indicate partitions.
In all, 16 files are created for each
disk device: 8 for character device partitions
a
through
h
, 8 for block device partitions
a
through
h
.
Tapes
use suffixes to indicate tape densities.
Up to 8 files are created for each
tape device: two for each density, using the suffixes defined in
Table 5-1.
Table 5-1: Tape Device Suffix for Legacy Device Special Files
Suffix | Description
a | QIC-24 density for SCSI QIC devices.
l | The lowest density supported by the device, or QIC-120 density for SCSI QIC devices.
m | Medium density when a drive is triple density, or QIC-150 density for SCSI QIC devices.
h | The highest density supported by the device, or QIC-320 density for SCSI QIC devices.
Legacy device naming conventions are supported so that scripts continue
to work as expected.
However, features available with the current device naming
convention might not work with the legacy naming convention.
When Version
5.0 or higher is installed, none of the legacy device special files (such
as
rz13d
) are created during the installation.
If you
determine that legacy device special file naming is required, you must create
the legacy device names by using the appropriate commands described in
dsfmgr
(8).
Some devices do not support legacy device special files.
5.5.2.2 Current Device Special File Names
Current device special files use abstract device names that convey no information about the device architecture or logical path to the device.
The new device naming convention consists of
a descriptive name for the device and an instance number.
These two elements
form the basename of the device as shown in
Table 5-2.
Table 5-2: Sample Current Device Special File Names
Location in /dev | Device Name | Instance | Basename
/disk | dsk | 0 | dsk0
/rdisk | dsk | 0 | dsk0
/disk | cdrom | 1 | cdrom1
/tape | tape | 0 | tape0
A combination of the device name with a system-assigned instance
number creates a basename such as
dsk0
.
The current device special files are named according to the basename of the devices, and include a suffix that conveys more information about the addressed device. This suffix differs depending on the type of device, as follows:
Disks - These device file names consist of the basename
and a suffix from
a
through
z
.
For
example,
dsk0a
.
Disks use
a
through
h
to identify partitions.
By default, CD-ROM and floppy disk devices
use only the letters
a
and
c
.
For example,
cdrom1c
and
floppy0a
.
The same device names exist in the class directory
/dev/rdisk
for raw devices.
Tapes -
These device file names have the basename and a suffix composed of the characters
_d
followed by an integer. For example,
tape0_d0
.
This suffix determines the density of the tape device, according to the entry
for the device in the
/etc/ddr.dbase
file.
For example:
Device | Density
tape0 | Default density
tape0c | Default density with compression
tape0_d0 | Density associated with entry 0 in /etc/ddr.dbase
tape0_d1 | Density associated with entry 1 in /etc/ddr.dbase
Using the new device special file naming, there is a direct mapping from the legacy tape device name suffix to the current name suffix as follows:
Legacy Device Name Suffix | Current Suffix
l (low) | _d0
m (medium) | _d2
h (high) | _d1
a (alternate) | _d3
There are two sets of device names for tape that both conform to the
current naming convention: the
/dev/tape
directory contains device special files for rewind devices, and the
/dev/ntape
directory contains those for no-rewind devices.
To determine the correct device special file to use, you can look in the
/etc/ddr.dbase
file.
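For example, you might search the file for the density entries that apply to your drive (a sketch; the relevant entry names depend on the device):
# grep -n -i density /etc/ddr.dbase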
5.5.2.3 Converting Device Special File Names
If you have shell scripts that use commands that act on device special files, be aware that any command or utility supplied with the operating system operates on current and legacy file names in one of the following ways:
The utility accepts both forms of device name.
Only the current device names are supported by the utility. If you use legacy device names, you cannot use the command.
Only the legacy device names are supported by the utility. If you use current device names, you cannot use the command.
No device can use both forms of device names simultaneously. Test your shell scripts for compliance with the device naming methods. Refer to the individual reference pages or the online help for a command.
If you want to update scripts, translating legacy names to the equivalent
current name is a simple process.
Table 5-3
shows some examples
of legacy device names and corresponding current device names.
There is no
relationship between the instance numbers.
A device associated with legacy
device special file
/dev/rz10b
might be associated with
/dev/disk/dsk2b
under the current system.
Using these names as examples, you can translate device names that appear
in your scripts.
You can also use the
dsfmgr
(8)
command to convert
device names.
Table 5-3: Sample Device Name Translations
Legacy Device Special File Name | New Device Special File Name
/dev/rmt0a | /dev/tape/tape0
/dev/rmt1h | /dev/tape/tape1_d1
/dev/nrmt0a | /dev/ntape/tape0_d0
/dev/nrmt3m | /dev/ntape/tape3_d2
/dev/rz0a | /dev/disk/dsk0a
/dev/rz10g | /dev/disk/dsk10g
/dev/rrz0a | /dev/rdisk/dsk0a
/dev/rrz10b | /dev/rdisk/dsk10b
5.5.3 Managing Device Special Files
In most cases, the management of device special files is undertaken
by the system itself.
During the initial full installation of the operating
system, the device special files are created for every SCSI disk and SCSI
tape device found on the system.
If you updated the operating system from
a previous version by using the update installation procedure, both the current
device special files and the legacy device files might exist.
However, if
you subsequently add new SCSI devices, the
dsfmgr
command
creates only the new device special files by default.
When the system is rebooted,
the
dsfmgr
command is called automatically during the boot
sequence to create the new device special files for the device.
The system
also automatically creates the device special files that it requires for pseudodevices
such as ptys (pseudoterminals).
When you add a SCSI disk or tape device to the system, the new device
is found and recognized automatically, added to the hardware management databases,
and its device special files created.
On the first reboot after installation
of the new device, the
dsfmgr
command is called automatically
during the boot sequence to create the new device special files for that device.
To support applications that work only with legacy device names, you might need to manually create the legacy device special files, either for every existing device, or for recently-added devices only. Some recent devices that support features such as Fibre Channel can use only the current special device file naming convention.
The following sections describe some typical uses of the
dsfmgr
command.
See
dsfmgr
(8)
for detailed information on the
command syntax.
The system script file
/sbin/dn_setup
,
which runs at boot time to create device special files, provides an example
of a script that uses
dsfmgr
command options.
5.5.3.1 Using dn_setup to Perform Generic Operations
The
/sbin/dn_setup
script runs automatically at system
start up to create device special file names.
Normally, you do not need to
use the
dn_setup
command.
It is useful if you need to troubleshoot
device name problems or restore a damaged special device file directory or
database files.
See also
Section 5.5.3.3.
If you frequently change your system configuration or install different versions of the operating system you might see device-related error messages at the system console during system start up. These messages might indicate that the system is unable to assign device special file names. This problem can occur when the saved configuration does not map to the current configuration. Adding or removing devices between installations can also cause the problem.
The
dn_setup
script has
the following functions.
Generally, the
-sanity_check
option
alone is useful to administrators.
Use the remaining options under the guidance
of technical support for debugging and problem solving.
The options are as
follows:
Verifies the consistency and currency of
the device special files and the directory hierarchy.
The message
Passed
is displayed if the check is successful.
Runs at boot time to create all the default device special databases, files, and directories.
Creates the required device special directories only.
Deletes everything in the device special directory tree and recreates the entire tree (including device special files).
Creates the class and category databases only.
Removes all the default device special databases, files, and directories and recreates everything.
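For example, to run the consistency check alone, enter the following command; the Passed message (the output shown here is a sketch) indicates success:
# /sbin/dn_setup -sanity_check
Passed.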
5.5.3.2 Displaying Device Classes and Categories
Any individual type of
device on the system is identified in the Category to Class-Directory, Prefix
Database file,
/etc/dccd.dat
.
You can display information
in these databases by using the
dsfmgr
command.
This information
enables you to find out what devices are on a system, and obtain device identification
attributes that you can use with other
dsfmgr
command options.
For example, you can find a class of devices that have related physical characteristics,
such as being disk devices.
Each class of devices has its own directory in
/dev
such as
/dev/ntape
for nonrewind tape devices.
Device classes are stored in the Device Class Directory Default Database file,
/etc/dcdd.dat
.
Use the following command to view the entries in the databases:
# /sbin/dsfmgr -s
dsfmgr: show all datum for system at /

Device Class Directory Default Database:
 #  scope  mode  name
 --  ---   ----  -----------
  1   l    0755  .
  2   c    0755  disk
  3   c    0755  rdisk
  4   c    0755  tape
  5   c    0755  ntape
  6   l    0755  none

Category to Class-Directory, Prefix Database:
 #  category        sub_category    type       directory  iw t mode  prefix
 --  --------------  --------------  ---------- ---------  -- - ----  --------
  1  disk            cdrom           block      disk       1  b 0600  cdrom
  2  disk            cdrom           char       rdisk      1  c 0600  cdrom
  3  disk            floppy          block      disk       1  b 0600  floppy
  4  disk            floppy          char       rdisk      1  c 0600  floppy
  5  disk            floppy_fdi      block      disk       1  b 0666  floppy
  6  disk            floppy_fdi      char       rdisk      1  c 0666  floppy
  7  disk            generic         block      disk       1  b 0600  dsk
  8  disk            generic         char       rdisk      1  c 0600  dsk
  9  parallel_port   printer         *          .          1  c 0666  lp
 10  pseudo          kevm            *          .          0  c 0600  kevm
 11  tape            *               norewind   ntape      1  c 0666  tape
 12  tape            *               rewind     tape       1  c 0666  tape
 13  terminal        hardwired       *          .          2  c 0666  tty
 14  *               *               *          none       1  c 0000  unknown

Device Directory Tree:
12800   2 drwxr-xr-x  6 root  system  2048 May 23 09:38 /dev/.
  166   1 drwxr-xr-x  2 root  system   512 Apr 25 15:58 /dev/disk
 6624   1 drwxr-xr-x  2 root  system   512 Apr 25 11:37 /dev/rdisk
  180   1 drw-r--r--  2 root  system   512 Apr 25 11:39 /dev/tape
 6637   1 drw-r--r--  2 root  system   512 Apr 25 11:39 /dev/ntape
  181   1 drwxr-xr-x  2 root  system   512 May  8 16:48 /dev/none

Dev Nodes:
13100   0 crw-------  1 root  system  79,  0 May  8 16:47 /dev/kevm
13101   0 crw-------  1 root  system  79,  2 May  8 16:47 /dev/kevm.pterm
13102   0 crw-r--r--  1 root  system  35,  0 May  8 16:47 /dev/tty00
13103   0 crw-r--r--  1 root  system  35,  1 May  8 16:47 /dev/tty01
13104   0 crw-r--r--  1 root  system  34,  0 May  8 16:47 /dev/lp0
  169   0 brw-------  1 root  system  19, 17 May  8 16:47 /dev/disk/dsk0a
 6627   0 crw-------  1 root  system  19, 18 May  8 16:47 /dev/rdisk/dsk0a
  170   0 brw-------  1 root  system  19, 19 May  8 16:47 /dev/disk/dsk0b
 6628   0 crw-------  1 root  system  19, 20 May  8 16:47 /dev/rdisk/dsk0b
  171   0 brw-------  1 root  system  19, 21 May  8 16:47 /dev/disk/dsk0c
.
.
.
This display provides you with information that you can use with other
dsfmgr
commands.
(See
dsfmgr
(8)
for a complete description of the
fields in the databases).
For example:
class
- The device class such as
disk
(a block device),
rdisk
(a character device),
or
tape
(a rewind device).
Use this information with the
dsfmgr -a
(add) or
dsfmgr -r
(remove) command
options to add or remove classes.
category
- The primary description
of a device.
For example, SCSI disks, CD-ROM readers and floppy disk readers
are all in the
disk
category.
Use this information with
the
dsfmgr -a
(add) or
dsfmgr -r
(remove)
command options to add or remove categories.
5.5.3.3 Verifying and Fixing the Databases
Under unusual circumstances, the device databases
might be corrupted or device special files might be accidentally removed from
the system.
You might see errors indicating that a device is no longer available,
but the device itself does not appear to be faulty.
If you suspect that there
might be a problem with the device special files, you can check the databases
by using the
dsfmgr -v
(verify) command option.
Caution
If you see error messages at system start up that indicate a device naming problem, use the verify command option to enable you to proceed with the boot. Check your system configuration before and after verifying the databases. The verification procedure fixes most errors and enables you to proceed, however it does not cure any underlying device or configuration problems.
Such problems are rare and usually arise when performing unusual operations such as switching between boot disks. Errors generally mean that the system is unable to recover and use a good copy of the previous configuration, and errors usually arise because the current system configuration no longer matches the database.
As for all potentially destructive system operations, ensure that you are able to restore the system to its identical previous configuration, and to restore the previous version of the operating system from your backup.
For example, if you attempt to configure the floppy disk device to use
the
mtools
commands, and you find that you cannot access
the device, use the following
dsfmgr
command to help diagnose
the problem:
# /sbin/dsfmgr -v
dsfmgr: verify all datum for system at /
 Device Class Directory Default Database:  OK.
 Device Category to Class Directory Database:  OK.
 Dev directory structure:  OK.
 Dev Nodes:
ERROR: device node does not exist: /dev/disk/floppy0a
ERROR: device node does not exist: /dev/disk/floppy0c
   Errors: 2
 Total errors: 2
This output shows that the device special files
for the floppy disk device are missing.
To correct this problem, use the same
command with the
-F
(fix) flag to correct the errors as
follows:
# /sbin/dsfmgr -v -F
dsfmgr: verify all datum for system at /
 Device Class Directory Default Database:  OK.
 Device Category to Class Directory Database:  OK.
 Dev directory structure:  OK.
 Dev Nodes:
WARNING: device node does not exist: /dev/disk/floppy0a
WARNING: device node does not exist: /dev/disk/floppy0c
   OK.
 Total warnings: 2
In the preceding output, the
ERROR
changes to a
WARNING
, indicating that the device special
files for the floppy disk are created automatically.
If you repeat the
dsfmgr -v
command, no further errors are displayed.
5.5.3.4 Deleting Device Special Files
If a device is permanently removed from the system,
you can remove its device special file to reassign the file to another type
of device.
Use the
dsfmgr -D
command option to remove device
special files, as shown in the following example:
# ls /dev/disk
cdrom0a  dsk0a  dsk0c  dsk0e  dsk0g  floppy0a
cdrom0c  dsk0b  dsk0d  dsk0f  dsk0h  floppy0c
# /sbin/dsfmgr -D /dev/disk/cdrom0*
-cdrom0a -cdrom0a -cdrom0c -cdrom0c
# ls /dev/disk
dsk0a  dsk0c  dsk0e  dsk0g  floppy0a
dsk0b  dsk0d  dsk0f  dsk0h  floppy0c
The output from the
ls
command shows that there are device special files for
cdrom0
.
Running the
dsfmgr -D
command option
on all
cdrom
devices, as shown by the wildcard symbol (*),
causes all device special files for that sub-category to be permanently deleted.
The message that follows repeats the basename (cdrom0)
twice, because it also deletes the device special files from the
/dev/rdisk
directory where the raw or character device special files
are located.
If device special files are deleted in error, and no hardware changes are made, recreate the files as follows:
# /sbin/dsfmgr -n cdrom0a
+cdrom0a +cdrom0a
# /sbin/dsfmgr -n cdrom0c
+cdrom0c +cdrom0c
5.5.3.5 Moving and Exchanging Device Special File Names
You might want to move (reassign) the
device special files between devices by using the
dsfmgr -m
(move) command option.
You can also exchange the device special files of one
device for those of another device by using the
-e
option.
For example:
# /sbin/dsfmgr -m dsk0 dsk10
# /sbin/dsfmgr -e dsk1 15
5.6 Manually Configuring Devices Using ddr_config
Most device management is automatic.
A device added
to a system is recognized, mapped, and added to the device databases as described
in
Section 5.4.
However, you might sometimes need to add
devices that the system cannot detect and add to the system automatically.
These devices might be old, or new prototypes, or they might not adhere closely
to supported standards such as SCSI.
In these cases, you must manually configure
the device and its drivers in the kernel, by using the
ddr_config
command described in this section.
The following sections describe how to create pseudoterminals (ptys), a terminal pseudodevice that enables remote logins.
There are two processes you use to effect the reconfiguration and rebuilding of a kernel:
The dynamic method uses the
ddr_config
command to update the kernel and effect the device configuration changes without shutting down the operating system.
The static method requires that you use the
MAKEDEV
and
config
commands.
You must also shut
down the system and restart it to rebuild the kernel and effect the changes.
Use the
MAKEDEV
command or the
mknod
command to create device special files
instead of using the
dsfmgr
command.
The
kmknod
command creates device special files for third-party kernel layered
products.
See
MAKEDEV
(8),
mknod
(8), and
kmknod
(8)
for more information.
For loadable
drivers, the
sysconfig
command creates the device special
files by using the information specified in the driver's stanza entry in the
/etc/sysconfigtab
database file.
5.6.1 Dynamic Method to Reconfigure the Kernel
The following sections explain how to use the
ddr_config
command to manage the Dynamic Device Recognition (DDR)
database for your system.
These sections introduce DDR, then describe how
you use the
ddr_config
command.
5.6.1.1 Understanding Dynamic Device Recognition
DDR is a framework for describing the operating parameters and characteristics of SCSI devices to the SCSI CAM I/O subsystem. You can use DDR to add new and changed SCSI devices to your system without rebooting the operating system. You do not disrupt user services and processes, as happens with static methods of device recognition.
DDR is preferred over the static method for recognizing SCSI devices.
The current, static method, as described in
Chapter 4,
is to edit the
/sys/data/cam_data.c
data file and include
custom SCSI device information, reconfigure the kernel, and shut down and
reboot the operating system.
Note
Support for the static method of recognizing SCSI devices will be retired in a future release.
You can use both methods on the same system, with the restriction that the devices described by each method are exclusive to that method (nothing is doubly-defined).
The information DDR provides about SCSI devices is needed by SCSI drivers.
You can supply this information by using DDR when you add new SCSI devices
to the system, or you can use the
/sys/data/cam_data.c
data file and static configuration methods.
The information provided by DDR
and the
cam_data.c
file have the same objectives.
When
compared to the static method of providing SCSI device information, DDR minimizes
the amount of information that is supplied by the device driver or subsystem
to the operating system and maximizes the amount of information that is supplied
by the device itself or by defaults specified in the DDR databases.
5.6.1.1.1 Conforming to Standards
Devices you add to the system should minimally conform to the SCSI-2
standard, as specified in
SCSI-2, Small Computer System Interface-2 (X3.131-1994), or other variants of the standard documented in
the
Software Product Description.
If your devices do
not comply with the standard, or if they require exceptions from the standard,
you store information about these differences in the DDR database.
If the
devices comply with the standard, there is usually no need to modify the database.
The system will automatically recognize such devices and you can configure
them by using the
hwmgr
command.
5.6.1.1.2 Understanding DDR Messages
Following are the most common DDR message categories and the action, if any, that you should take.
Console messages are displayed during the boot sequence. Frequently, these messages indicate that the kernel cannot read the DDR database. This error occurs when the system's firmware is not at the proper revision level. Upgrade to the correct revision level of the firmware.
Console messages warn about corrupted entries in the database. Recompile and regenerate the database.
Runtime messages generally indicate syntax errors that are
produced by the
ddr_config
compiler.
The compiler runs
when you use the
-c
option to the
ddr_config
command and does not produce an output database until all syntax
errors are corrected.
Use the
-h
option to the
ddr_config
command to display help on command options.
5.6.2 Changing the DDR Database
When
you make a change to the operating parameters or characteristics of a SCSI
device, you must describe the changes in the
/etc/ddr.dbase
file.
You must compile the changes by using the
ddr_config -c
command.
Two common reasons for changes are:
Your device deviates from the SCSI standard or reports something different from the SCSI standard.
You want to optimize device defaults, most commonly the
TagQueueDepth
parameter, which specifies the maximum number of active
tagged requests that the device supports (an illustrative entry follows this list).
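As an illustration only, an entry that raises the tag queue depth for a hypothetical disk might take a form similar to the following; the authoritative stanza syntax is defined in ddr.dbase(4):

SCSIDEVICE
    #
    # Hypothetical entry; all names and values are illustrative.
    #
    Type = disk
    Name = "DEC" "RZ28M"
    PARAMETERS:
        TypeSubClass  = hard_disk
        TagQueueDepth = 0x20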
You use the
ddr_config
-c
command to compile the
/etc/ddr.dbase
file
and produce a binary database file,
/etc/ddr.db
.
When the
kernel is notified that the file's state has changed, it loads the new
/etc/ddr.db
file.
In this way, the
SCSI CAM I/O subsystem is dynamically updated with the changes that you made
in the
/etc/ddr.dbase
file and the contents of the on-disk
database are synchronized with the contents of the in-memory database.
Use the following procedure to compile the
/etc/ddr.dbase
database:
Log in as root or become the superuser.
Enter the
ddr_config -c
command, for example:
# /sbin/ddr_config -c
There is no message confirming successful completion.
When
the prompt is displayed, the compilation is complete.
If there are syntax
errors, they are displayed at standard output and no output file is compiled.
5.6.3 Converting Customized cam_data.c Information
You use the following procedure to
transfer customized information about your SCSI devices from the
/sys/data/cam_data.c
file to the
/etc/ddr.dbase
text database.
In this example,
MACHINE
is the
name of your machine's system configuration file.
Log on as root or become the superuser.
To produce a summary of the additions and modifications that
you should make to your
/etc/ddr.dbase
file, enter the
ddr_config -x
command.
For example:
# /sbin/ddr_config -x MACHINE > output.file
This command uses as input the system configuration file. (You specify the configuration file when you build your running kernel.) The procedure runs in multiuser mode and requires no input after it starts. Redirect output to a file to save the summary information. Compile errors are reported to standard error and the command terminates when the error is reported. Warnings are reported to standard error and do not terminate the command.
Edit the characteristics that are listed on the output file
into the
/etc/ddr.dbase
file, following the syntax requirements
of that file.
Instructions for editing the
/etc/ddr.dbase
database are found in
ddr.dbase
(4).
Enter the
ddr_config -c
command to compile
the changes.
See Section 5.6.2 for more information.
You can add pseudodevices, disks, and tapes statically (without using
DDR) by using the methods described in the following sections.
5.6.4 Adding Pseudoterminals and Devices Without Using DDR
System V Release 4 (SVR4) pseudoterminals (ptys) are implemented by default and are defined as follows:
/dev/pts/N
The variable N is a number from 0-9999.
This implementation allows for more scalability than the BSD ptys (tty[a-zA-Z][0-9a-zA-Z]).
The base system commands and utilities support both SVR4 and BSD ptys.
To
revert to the original BSD behavior, create the BSD ptys by using
the
MAKEDEV
command.
See also
SYSV_PTY
(8),
pty
(7),
and
MAKEDEV
(8).
5.6.4.1 Adding Pseudoterminals
Pseudoterminals enable users to use the network to access a system.
A pseudoterminal is a pair of character devices that emulate a hardware terminal
connection to the system.
Instead of hardware, however, there is a master
device and a slave device.
Pseudoterminals, unlike terminals, have no corresponding
physical terminal port on the system.
Remote login sessions, window-based
software, and shells use pseudoterminals for access to a system.
By default,
SVR4 device special files such as
/dev/pts/
n
are created.
You must use
/dev/MAKEDEV
to create
BSD pseudoterminals such as
/dev/ttypn.
Two implementations of pseudoterminals are offered: BSD STREAMS and BSD
clist
.
For some installations, the default number of
pty
devices is adequate.
However, as your user community grows, and each user
wants to run multiple sessions of one or more timesharing machines in your
environment, the machines might run out of available
pty
lines.
The following command enables you to review the current value:
# sysconfig -q pts
pts:
nptys = 255
You can dynamically change the value
with the
sysconfig
command, although this change is not
preserved across reboots:
# sysconfig -r pts nptys=400
To modify the value and preserve it across reboots, use the following procedure:
Log in as superuser (root).
Add or edit the pseudodevice
entry in the system configuration file
/etc/sysconfigtab
.
By default, the kernel supports 255 pseudoterminals.
If you add more pseudoterminals
to your system, you must edit the system configuration file entry and increment
the number 255 by the number of pseudoterminals you want to add.
The following
example shows how to add 400 pseudoterminals:
pts:
    nptys=400
The pseudodevice entry for
clist-
based pseudoterminals is as follows:
pseudo-device pty 655
For more information on the configuration file and its pseudodevice keywords, refer to Chapter 4.
For
clist-
based pseudoterminals, you also
need to rebuild and boot the new kernel.
Use the information on rebuilding
and booting the new kernel in
Chapter 4.
When the system is first installed, the configuration file contains a pseudodevice entry with the default number of 255 pseudoterminals. If for some reason the number is deleted and not replaced with another number, the system defaults to supporting the minimum value of 80 pseudoterminals. The maximum value is 131072.
If you want to create BSD terminals, use the
/dev/MAKEDEV
command as follows:
Log in as root and change to the
/dev
directory.
Create
the device special files by using the
MAKEDEV
command,
as follows:
./MAKEDEV pty_#
The number sign (#) represents the set of pseudoterminals (0 to 101) that you want to create.
The
first 51 sets (0 to 50) create 16 pseudoterminals for each set.
The last
51 sets (51 to 101) create 46 pseudoterminals for each set.
See
MAKEDEV
(8)
for instructions on making a large number of pseudoterminals.
(See the Software
Product Description (SPD) for the maximum number of supported pseudoterminals).
Note
By default, the installation software creates device special files for the first two sets of pseudoterminals, pty0 and pty1. The pty0 pseudoterminals have corresponding device special files named /dev/ttyp0 through /dev/ttypf. The pty1 pseudoterminals have corresponding device special files named /dev/ttyq0 through /dev/ttyqf.
If you add pseudoterminals to your system, the
pty#
variable must be higher than
pty1
because the installation software sets
pty0
and
pty1
.
For example, to create device special files for a third set
of pseudoterminals, enter:
# ./MAKEDEV pty2
The
MAKEDEV
command
lists the device special files it has created.
For example:
MAKEDEV: special file(s) for pty2:
ptyr0 ttyr0 ptyr1 ttyr1 ptyr2 ttyr2 ptyr3 ttyr3 ptyr4 ttyr4
ptyr5 ttyr5 ptyr6 ttyr6 ptyr7 ttyr7 ptyr8 ttyr8 ptyr9 ttyr9
ptyra ttyra ptyrb ttyrb ptyrc ttyrc ptyrd ttyrd ptyre ttyre
ptyrf ttyrf
If
you want to allow root logins on all pseudoterminals, make sure an entry for
ptys is present in the
/etc/securettys
file.
If you do
not want to allow root logins on pseudoterminals, delete the entry for ptys
from the
/etc/securettys
file.
For example, to add the
entries for the new tty lines and to allow root login on all pseudoterminals,
enter the following lines in the
/etc/securettys
file:
/dev/tty08    # direct tty
/dev/tty09    # direct tty
/dev/tty10    # direct tty
/dev/tty11    # direct tty
ptys
See
securettys
(4)
for more information.
When
you add new SCSI devices to your system, the system automatically detects
and configures them.
It runs the appropriate
hwmgr
and
dsfmgr
commands to register the devices, assign identifiers, and
create device special files.
However, you might need to manually create device
names for other devices by using the
MAKEDEV
command.
You
might also need to recreate device special files that you incorrectly deleted
from the system.
For new devices, you must physically connect the devices and then make the devices known to the system. There are two methods, one for static drivers and another for loadable drivers. Before adding a device, read the owner's manual that came with your system processor and any documentation that came with the device itself. You might also require a disk containing the driver software.
Appendix D provides an outline example of adding a PCMCIA modem to a system, and shows you how to create the device special files.
It is not necessary to use the
MAKEDEV
command if
you simply want to create legacy
rz
or
tz
device special files in
/dev
such as
/dev/rz5
.
The
dsfmgr
command provides a method of creating
these device names.
To add a device for a loadable
driver, see the device driver documentation.
To add a device for a static driver, see Section 5.6.4.1.
Next, make the device special files for the device, by following these steps:
Change to the
/dev
directory.
Create the device special files by using the
MAKEDEV
command as follows:
# ./MAKEDEV device#
The
device
variable
is the device mnemonic for the drive you are adding.
Appendix B
lists the device mnemonics for all supported disk and tape drives.
The number sign (#) represents the number of the device.
For
example, to create the device special files for two PCMCIA modem cards, use
the following command:
# ./MAKEDEV ace2 ace3
MAKEDEV: special file(s) for ace2: tty02
MAKEDEV: special file(s) for ace3: tty03
The generated special files should look like this:
crw-rw-rw-   1 root     system    35,  2 Oct 27 14:02 tty02
crw-rw-rw-   1 root     system    35,  3 Oct 27 14:02 tty03
Stop system activity by using the
shutdown
command and then turn off the processor.
See
Chapter 2
for
more information.
Power up the machine. To ensure that all the devices are seen by the system, power up the peripherals before powering up the system box.
Boot the system with the new kernel. See Chapter 2 for information on booting your processor.
5.7 Using Device Commands and Utilities
The preceding sections described generic hardware management tools that
you use to manage many aspects of all devices, such as the
hwmgr
command described in
Section 5.4.
The following
sections describe hardware management tools that are targeted at a particular
kind of device and perform a specific task.
The topics covered in these sections
are:
Finding device utilities
Using SCSI utilities
Disk partitioning
Copying disks
Monitoring disks
5.7.1 Finding Device Utilities
Many of the device utilities are documented elsewhere
in this guide or at other locations in the documentation set.
For example,
utilities that enable you to configure network devices are documented in detail
in the
Network Administration: Services
guide.
Table 5-4
provides references
to utilities documented in the guides, including those listed in this chapter.
Other utilities are documented in reference pages.
Table 5-5
provides references to utilities documented in the reference pages and also
provides pointers to reference data such as the Section 7 interface reference
pages.
Table 5-4: Device Utilities Documented in the Guides
Device            | Task                                  | Location
------------------|---------------------------------------|------------------------------------------
Processor         | Starting or Stopping                  | Chapter 2
                  | Sharing Resources                     | Chapter 3, Class Scheduler
                  | Monitoring                            | Chapter 3 and Chapter 12 (Environmental)
                  | Power Management                      | Chapter 3, dxpower
                  | Testing Memory                        | Chapter 12
                  | Error and Event Handling              | Chapter 12 and Chapter 13
SCSI buses        | Advanced Configuration and Management | Section 5.7.2.1, scu
Disks             | Partitioning                          | diskconfig, disklabel
                  | Copying                               | Section 5.7.5, dd
                  | Monitoring Usage                      | Section 5.7.6, df and du
                  | Power Management                      | Chapter 3
                  | File Systems Status                   | Chapter 6
                  | Testing and Exercising                | Chapter 12
Tapes (and Disks) | Archiving                             | Chapter 9
                  | Testing and Exercising                | Chapter 12
Clock             | Setting                               | Chapter 2
Modem             | Configuring                           | Chapter 1
Table 5-5: Device Utilities Documented in the Reference Pages
Device            | Task                     | Location
------------------|--------------------------|-------------------------------------------------------------
Devices (General) | Configuring              | hwmgr(8), devswmgr(8), dsfmgr(8)
                  | Device Special Files     | kmknod(8), mknod(8), MAKEDEV(8), dsfmgr(8)
                  | Interfaces               | atapi_ide(7), devio(7), emx(7)
Processor         | Starting and Stopping    | halt(8), psradm(8), reboot(2)
                  | Allocating CPU Resources | class_scheduling(4), processor_sets(4), runon(1)
                  | Monitoring               | dxsysinfo(8), psrinfo(1)
SCSI buses        | Managing                 | sys_attrs_cam(5), ddr.dbase(4), ddr_config(8)
Disks             | Partitioning             | diskconfig(8), disklabel(4), disklabel(8), disktab(4)
                  | Monitoring               | dxsysinfo(8), diskusg(8), acctdisk(8), df(1), du(1), quota(1)
                  | Testing and Maintenance  | diskx(8), zeero(8)
                  | Interfaces               | ra(7), radisk(8), ri(7), rz(7)
                  | Swap Space               | swapon(8)
Tapes (and Disks) | Archiving                | bttape(8), dxarchiver(8), rmt(8)
                  | Testing and Maintenance  | tapex(8)
                  | Interfaces               | tz(7), mtio(7), tms(7)
Floppy            | Tools                    | dxmtools(1), mtools(1)
                  | Testing and Maintenance  | fddisk(8)
                  | Interfaces               | fd(7)
Terminals, Ports  | Interfaces               | ports(7)
Modem             | Configuring              | chat(8)
                  | Interfaces               | modem(7)
Keyboard, Mouse   | Interfaces               | dc(7), scc(7)
See
Appendix A
for a list of the utilities provided
by SysMan.
5.7.2 SCSI and Device Driver Utilities
The following sections describe utilities that you use to manage SCSI
devices and device drivers.
5.7.2.1 Using the SCSI Configuration Utility, scu
The SCSI/CAM Utility Program,
scu
, provides commands
for advanced maintenance and diagnostics of SCSI peripheral devices and the
CAM I/O subsystem.
For most daily operations, you use the
hwmgr
command.
The
scu
program has an extensive help feature
that describes its options and conventions.
See
scu
(8)
for detailed information
on using this command.
You can use
scu
to:
Format disks
Reassign a defective disk block
Reserve and release a device
Display and set device and program parameters
Enable and disable a device
DSA Disks
For Digital Storage Architecture (DSA) disks, use the radisk program. See radisk(8) for information.
Examples of
scu
usage are:
# scu
scu> set nexus bus 0 target 0 LUN 0
Device: RZ1CB-CA, Bus: 0, Target: 0, LUN: 0, Type: Direct Access
scu> show capacity
Disk Capacity Information:
        Maximum Capacity: 8380080 (4091.836 megabytes)
            Block Length: 512
scu> show scsi status 0
SCSI Status = 0 = SCSI_STAT_GOOD = Command successfully completed
5.7.2.2 Using the Device Switch Manager, devswmgr
The devswmgr command enables you to manage the device switch table by displaying information about the device drivers in the table.
You can
also use the command to release device switch table entries.
Typically, you
release the entries for a driver after you have unloaded the driver and do
not plan to reload it later.
Releasing the entries frees them for use by
other device drivers.
Examples of
devswmgr
usage for device data are:
# devswmgr -display

device switch database read from primary file
device switch table has 200 entries

# devswmgr -getnum

Device switch reservation list
(*=entry in use)
          driver name  instance  major
-----------------------  ------  -----
                    pfm       1    71*
                    fdi       2    58*
                    xcr       2    57
                   kevm       1    56*
               cam_disk       2    55*
                    emx       1    54
                  TMSCP       2    53
                   MSCP       2    52
                    xcr       1    44
                    LSM       4    43
                    LSM       3    42
                    LSM       2    41*
                    LSM       1    40*
                    ace       1    35*
          parallel_port       1    34*
               cam_uagt       1    30
                   MSCP       1    28
                  TMSCP       1    27
                    scc       1    24
                 presto       1    22
                cluster       2    21*
                cluster       1    19*
                    fdi       1    14*
               cam_tape       1     9
               cam_disk       1     8*
                    pty       2     7
                    pty       1     6
                    tty       1     1
                console       1     0
5.7.3 Partitioning Disks Using diskconfig
The Disk Configuration graphical user interface (diskconfig) enables you to perform the following tasks:
Display attribute information for existing disks
Modify disk configuration attributes
Administer disk partitions
See
diskconfig
(8)
for information on invoking the Disk Configuration GUI (diskconfig).
An online help volume describes
how you use the graphical interface.
See
disklabel
(8)
for information on
command options.
The Disk Configuration GUI provides a graphical interface to several disk maintenance tasks that you can perform manually, by using the following commands:
disklabel
-
Use this command to install, examine, or modify the label on a disk drive
or pack.
The disk label contains information about the disk, such as type,
physical parameters, and partitioning.
See also the /etc/disktab file, described in disktab(4).
newfs
- This command creates a new UFS file system on the specified
device.
You cannot use the
newfs
command to create Advanced
File System (AdvFS) domains.
Instead, use the
mkfdmn
command,
as described in
mkfdmn
(8).
mkfdmn
and
mkfset
- Use these commands
to create Advanced File System (AdvFS) domains and filesets.
An example of using manual methods is provided in Section 5.7.4.
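For example, a manual session might look like the following sketch, assuming a disk named dsk10 with a UFS file system on partition g and an AdvFS domain and fileset on partition h (the disk name, partition letters, domain name, and fileset name are all illustrative):
# disklabel -e dsk10                    # edit the partition layout
# newfs /dev/disk/dsk10g                # create a UFS file system
# mkfdmn /dev/disk/dsk10h new_domain    # create an AdvFS domain
# mkfset new_domain users               # create a fileset in the domain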
Invoke the Disk Configuration interface as follows:
At the system prompt, type
diskconfig
.
From the CDE Front Panel, SysMan Applications pop-up menu, choose Configuration. Then select the Disk icon from the SysMan Configuration folder.
Caution
Disk Configuration displays appropriate warnings when you attempt to change partition sizes. However, you should plan the changes in advance to ensure that you do not overwrite any required data. Back up any data partitions before attempting this task.
A window titled Disk Configuration on hostname is displayed. This is the main Disk Configuration window, which lists the following information for each disk:
The disk basename, such as
dsk10
.
See
Section 5.5
for information on disk names.
The device model, such as
RZ1CB-CA
The physical location of the device, specifying Bus, Target and LUN (logical unit number). See Section 5.4 for information on the device location.
Select a device by double-clicking on the list item (or select a disk and press Configure). The following windows are displayed:
The first window provides the following information and options:
A graphical representation of the disk partitions, in a horizontal bar-chart format. The currently-highlighted partition is a different color, and the details of that partition are displayed in the Selected Partition box. You can use the bar chart handles (or flags) to change the partition sizes. Position the cursor as follows:
On the center handle to change both adjacent partitions
On the top flag to move up the start of the right-hand partition
On the bottom flag to move down the end of the left-hand partition
Press MB1 and drag the mouse to move the handles.
A pull-down menu that enables you to toggle the sizing information between megabytes, bytes, and blocks.
A statistics box, which displays disk information such as the device name, the total size of the disk, and usage information. This box enables you to assign or edit the disk label, and create an alias name for the device.
The Selected Partition box, which displays dynamic sizes for the selected partition. These sizes are updated as you change the partitions by using the bar-chart. You can also type the partition sizes directly into these windows to override the current settings. This box also enables you to select the file system for the partition and, if using AdvFS, the domain name and fileset name.
The Disk Attributes... option.
This button displays some of the physical attributes of the device.
The Partition Table... option, which is described in the following item.
The Partition Table window displays a bar-chart of the current partitions in use, their sizes, and the file system in use. You can toggle among the current partition sizes, the default table for this device, and the original (starting) table from when this session started. If you make errors during a manual partition change, you can use this window to reset the partition table.
Refer to the online help for more information on these windows.
After making partition adjustments, use the SysMan Menu options to mount any newly created file systems as follows:
Invoke the SysMan Menu, as described in Chapter 1
Expand the Storage options, and select Basic File System Utilities - Mount File Systems
In the Mount Operation window, select the option to mount a specific file system and press Next
In the Name and Mount Point window:
Type a mount point, such as
/usr/newusers
Type the partition name, such as
/dev/disk/dsk0g
or a domain name, such as
newusr_domain#usr
.
Your new file system is now accessible.
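If you prefer the command line, the equivalent mount operations might look like the following sketch, reusing the example names from the preceding steps:
# mkdir -p /usr/newusers
# mount /dev/disk/dsk0g /usr/newusers              # a UFS partition
# mount -t advfs newusr_domain#usr /usr/newusers   # or an AdvFS fileset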
5.7.4 Manually Partitioning Disks
This section provides the information you need to change the partition scheme of your disks. In general, you allocate disk space during the initial installation or when adding disks to your configuration. Usually, you do not have to alter partitions; however, there are cases when it is necessary to change the partitions on your disks to accommodate changes and to improve system performance.
The disk label provides detailed information about the geometry of the
disk and the partitions into which the disk is divided.
You can change the
label with the
disklabel
command.
You must be the root
user to use the
disklabel
command.
There are two copies of a disk label, one located on the disk and one
located in system memory.
Because it is faster to access system memory than
to perform I/O, when the system boots, it copies the disk label into memory.
Use the
disklabel
-r
command
to directly access the label on the disk instead of going through the in-memory
label.
Note
Before you change disk partitions, back up all the file systems if there is any data on the disk. Changing a partition overwrites the data on the old file system, destroying the data.
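For example, a level 0 backup of a file system on the disk might look like the following sketch, assuming a tape drive named /dev/tape/tape0 (one possible form; see Chapter 9 for the supported archiving options):
# dump -0u -f /dev/tape/tape0 /usr/users     # a UFS file system
# vdump -0 -f /dev/tape/tape0 /usr/users     # an AdvFS fileset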
When changing partitions, remember that:
You cannot change the offset, which is the beginning sector, or shrink any partition on a mounted file system or on a file system that has an open file descriptor.
If you need a single partition on the entire disk, use partition
c
.
Unless it is mounted, you must specify the raw device for
partition
a
, which begins at the start of the disk (sector
0), when you change the label.
If partition
a
is mounted,
you must then use partition
c
to change the label.
Partition
c
must also begin at sector 0.
Caution
If partition a is mounted and you attempt to edit the disk label by using device partition a, you cannot change the label. Furthermore, you do not receive any error messages indicating that the label is not written.
Before changing the size of a disk partition, review
the current partition setup by viewing the disk label.
The
disklabel
command allows you to view the partition sizes.
The bottom, top,
and size of the partitions are in 512-byte sectors.
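For example, a partition whose size is listed as 131072 sectors occupies 131072 × 512 bytes, that is, 64 MB.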
To review the current disk partition setup, use the following
disklabel
command:
/sbin/disklabel -r device
Specify the device with its directory name
(/dev)
followed by the raw device name, drive number, and partition
a
or
c
.
You can also specify the disk unit and number, such
as
dsk1
.
An example of using the
disklabel
command to view
a disk label follows:
# disklabel -r /dev/rdisk/dsk3a
type: SCSI
disk: rz26
label:
flags:
bytes/sector: 512
sectors/track: 57
tracks/cylinder: 14
sectors/cylinder: 798
cylinders: 2570
rpm: 3600
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0           # milliseconds
track-to-track seek: 0  # milliseconds
drivedata: 0

8 partitions:
#        size   offset    fstype  [fsize bsize   cpg]
  a:   131072        0    4.2BSD    1024  8192    16   # (Cyl.    0 - 164*)
  b:   262144   131072    unused    1024  8192         # (Cyl.  164*- 492*)
  c:  2050860        0    unused    1024  8192         # (Cyl.    0 - 2569)
  d:   552548   393216    unused    1024  8192         # (Cyl.  492*- 1185*)
  e:   552548   945764    unused    1024  8192         # (Cyl. 1185*- 1877*)
  f:   552548  1498312    unused    1024  8192         # (Cyl. 1877*- 2569*)
  g:   819200   393216    unused    1024  8192         # (Cyl.  492*- 1519*)
  h:   838444  1212416    4.2BSD    1024  8192    16   # (Cyl. 1519*- 2569*)
Take care when you change partitions
because you can overwrite data on the file systems or make the system inefficient.
If the partition label becomes corrupted while you are changing the partition
sizes, you can return to the default partition label by using the
disklabel
command with the
-w
option, as
follows:
# disklabel -r -w /dev/rdisk/dsk1a rz26
The
disklabel
command allows you to change the partition label of an individual
disk without rebuilding the kernel and rebooting the system.
Use the following
procedure:
Display disk space information about the file systems by using
the
df
command.
View the
/etc/fstab
file to determine if
any file systems are designated as swap space.
Examine the disk's label by using the
disklabel
command with the
-r
option.
(See
rz
(7),
ra
(7),
and
disktab
(4) for information on the default disk partitions.)
Back up the file systems.
Unmount the file systems on the disk whose label you want to change.
Calculate the new partition parameters. You can increase or decrease the size of a partition. You can also cause partitions to overlap.
Edit the disk label by using the
disklabel
command with the
-e
option to
change the partition parameters, as follows:
# /sbin/disklabel -e disk
An editor, either the
vi
editor or that specified
by the EDITOR environment variable, is invoked so you can edit the disk label,
which is in the format displayed with the
disklabel
-r
command.
The
-r
option writes the label directly to
the disk and updates the system's in-memory copy, if possible.
The
disk
parameter specifies the unmounted disk (for example,
dsk0
or
/dev/rdisk/dsk0a
).
After you quit the editor and save the changes, the following prompt is displayed:
write new label? [?]:
Enter
y
to write the new label or
n
to discard the
changes.
Use the
disklabel
command with the
-r
option to view the new disk label.
5.7.4.1 Checking for Overlapping Partitions
Commands to mount or create file systems, add a new
swap device, and add disks to the Logical Storage Manager first check whether
the disk partition specified in the command already contains valid data, and
whether it overlaps with a partition that is already marked for use.
The
fstype
field of the disk label enables you to determine when a partition
or an overlapping partition is in use.
If the partition is not in use, the command continues to execute.
In
addition to mounting or creating file systems, commands such as
mount
,
newfs
,
fsck
,
voldisk
,
mkfdmn
,
rmfdmn
, and
swapon
also modify the disk label, so that the
fstype
field specifies partition usage.
For example, when you add a disk partition
to an AdvFS domain, the
fstype
field is set to
AdvFS
.
If the partition is not available, these commands return an error message and ask if you want to continue, as shown in the following example:
# newfs /dev/disk/dsk8c
WARNING: disklabel reports that basename,partition currently is
being used as "4.2BSD" data.
Do you want to continue with the operation and possibly destroy
existing data? (y/n) [n]
Applications, as well as operating system commands, can modify the
fstype
of the disk label, to indicate that a partition is in use.
See
check_usage
(3)
and
set_usage
(3)
for more information.
5.7.5 Copying Disks
You can use the
dd
command to copy a complete disk
or a disk partition; that is, you can produce a physical copy of the data
on the disk or disk partition.
Note
Because the dd command is not meant for copying multiple files, copy a disk or a partition only to a disk that you are using as a data disk, or to a disk that does not contain a file system. Use the dump and restore commands, as described in Chapter 9, to copy disks or partitions that contain a UFS file system. Use the vdump and vrestore commands, as described in AdvFS Administration, to copy disks or partitions that contain an AdvFS fileset.
UNIX protects the first block of a disk with a valid disk label because this is where the disk label is stored. As a result, if you copy a partition to a partition on a target disk that contains a valid disk label, you must decide whether you want to keep the existing disk label on that target disk.
If you want to maintain the disk label on the target disk, use the
dd
command with the
skip
and
seek
options to move past the protected disk label area on the target disk.
The
target disk must be the same size as or larger than the original disk.
To determine if the target disk has a label, use the
following
disklabel
command:
# /sbin/disklabel -r target_disk
You must specify the target device directory name
(/dev)
followed by the raw device name, drive number, and partition
c
.
If the disk does not contain a label, the following message is displayed:
Bad pack magic number (label is damaged, or pack is unlabeled)
The following example shows a disk that already contains a label:
# disklabel -r /dev/rdisk/dsk1c
type: SCSI
disk: rz26
label:
flags:
bytes/sector: 512
sectors/track: 57
tracks/cylinder: 14
sectors/cylinder: 798
cylinders: 2570
rpm: 3600
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0           # milliseconds
track-to-track seek: 0  # milliseconds
drivedata: 0

8 partitions:
#        size   offset    fstype  [fsize bsize   cpg]
  a:   131072        0    unused    1024  8192         # (Cyl.    0 - 164*)
  b:   262144   131072    unused    1024  8192         # (Cyl.  164*- 492*)
  c:  2050860        0    unused    1024  8192         # (Cyl.    0 - 2569)
  d:   552548   393216    unused    1024  8192         # (Cyl.  492*- 1185*)
  e:   552548   945764    unused    1024  8192         # (Cyl. 1185*- 1877*)
  f:   552548  1498312    unused    1024  8192         # (Cyl. 1877*- 2569*)
  g:   819200   393216    unused    1024  8192         # (Cyl.  492*- 1519*)
  h:   838444  1212416    unused    1024  8192         # (Cyl. 1519*- 2569*)
If the target disk already contains a label and you do not want to keep
the label, you must clear the label by using the
disklabel
-z
command.
For example:
# disklabel -z /dev/rdisk/dsk1c
To copy the original disk to the target disk and keep the target disk
label, use the
dd
command, specifying the device directory
name
(/dev)
followed by the raw device name, drive number,
and the original and target disk partitions.
For example:
# dd if=/dev/rdisk/dsk0c of=/dev/rdisk/dsk1c \ skip=16 seek=16 bs=512k
5.7.6 Monitoring Disk Use
To ensure an adequate amount of free disk space, regularly monitor the disk use of your configured file systems. You can do this in any of the following ways:
Check available free space by using the
df
command
Check disk use by using the
du
command
or the
quot
command
Verify disk quotas (if imposed) by using the
quota
command
You can use the
quota
command only if you are the
root user.
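For example, to display a particular user's quotas, where the user name is illustrative:
# quota -v rubin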
5.7.6.1 Checking Available Free Space
To ensure sufficient space for your configured
file systems, use the
df
command regularly to check the
amount of free disk space in all of the mounted file systems.
The
df
command displays statistics about the amount of free disk space
on a specified file system or on a file system that contains a specified file.
With no arguments or options, the
df
command displays
the amount of free disk space on all of the mounted file systems.
For each
file system, the
df
command reports the file system's configured
size in 512-byte blocks, unless you specify the
-k
option, which reports the size in kilobyte blocks.
The command displays the
total amount of space, the amount in use, the amount available (free), the
percentage in use, and the directory on which the file system is mounted.
For AdvFS file domains, the
df
command displays disk
space usage information for each fileset.
If you specify a device that has no file systems mounted on it,
df
displays the information for the root file system.
You can specify a file path name to display the amount of available disk space on the file system that contains the file.
See
df
(1)
for more information.
Note
You cannot use the df command with the block or character special device name to find free space on an unmounted file system. Instead, use the dumpfs command.
The following example displays disk space information about all the mounted file systems:
# /sbin/df
Filesystem          512-blks     used    avail  capacity  Mounted on
/dev/disk/dsk2a        30686    21438     6178     77%    /
/dev/disk/dsk0g       549328   378778   115616     76%    /usr
/dev/disk/dsk2g       101372     5376    85858      5%    /var
/dev/disk/dsk3c       394796       12   355304      0%    /usr/users
/usr/share/mn@tsts    557614   449234    52620     89%    /usr/share/mn
domain#usr            838432   680320   158112     81%    /usr
Note
The newfs command reserves a percentage of the file system disk space for allocation and block layout. This can cause the df command to report that a file system is using more than 100 percent of its capacity. You can change this percentage by using the tunefs command with the -minfree flag.
If
you determine that a file system has insufficient space available, you might
want to find out who is using the space.
You can do this with the
du
command or the
quot
command.
The
du
command returns disk space allocation by directory.
With this information you can decide who is using the most space and who
should free up disk space.
The
du
command displays the number of blocks contained
in all directories (listed recursively) within each specified directory, file
name, or (if none are specified) the current working directory.
The block count includes the indirect blocks of each file in 1-kilobyte units, independent of the cluster size in use by the system.
If you do not specify any options, an entry is generated for each directory.
See
du
(1)
for more information on command options.
The following example displays a summary of blocks that all main subdirectories
in the
/usr/users
directory use:
# /usr/bin/du -s /usr/users/*
440     /usr/users/barnam
43      /usr/users/broland
747     /usr/users/frome
6804    /usr/users/morse
11183   /usr/users/rubin
2274    /usr/users/somer
From this information, you can determine that user Rubin is using the most disk space.
The following example displays the space that each file and subdirectory
in the
/usr/users/rubin/online
directory uses:
# /usr/bin/du -a /usr/users/rubin/online
1       /usr/users/rubin/online/inof/license
2       /usr/users/rubin/online/inof
7       /usr/users/rubin/online/TOC_ft1
16      /usr/users/rubin/online/build
   .
   .
   .
251     /usr/users/rubin/online
Note
As an alternative to the du command, you can use the ls -s command to obtain the size and usage of files. Do not use the ls -l command to obtain usage information; ls -l displays file sizes only.
You can use the
quot
command to list the number of
blocks in the named file system currently owned by each user.
You must be the root user to use the quot command.
The following example displays the number of blocks that each user owns
and the number of files owned by each user in the
/dev/disk/dsk0h
file system:
# /usr/sbin/quot -f /dev/disk/dsk0h
Note
You must specify the character device special file to return the information, because when the device is mounted the block special device file is busy.
See
quot
(8)
for more information.