5    Hardware Management

This chapter describes the utilities available to assist you in administering the system hardware, which consists of the CPUs and all associated devices. The utilities work on single systems and on systems joined into clusters. Hardware management involves viewing the status of system devices and performing administrative operations on them if necessary. This includes adding and removing devices, troubleshooting any devices that are not working, and monitoring devices to prevent problems before they occur.

You may also need to administer the software that is associated with devices, such as drivers, kernel pseudodevices, and device special files. This software enables devices to communicate and transfer data between system components. Information on administering the related software components is included in this chapter.

Most operations require root user privileges; however, you can assign such privileges to nonroot users using the SysMan division of privileges (DOP) feature. See the dop(8) reference page for more information.

This chapter contains the following sections:

5.1    Understanding Hardware

A hardware device can be any part, or component, of a system. The system is organized in a hierarchy with the CPUs at the top, and discrete devices such as disks and tapes at the bottom. This is sometimes also referred to as the system topology. The following components are typical of the device hierarchy of most computer systems, although this is not a definitive list:

Hardware management involves understanding how all the components relate to each other, how they are logically and physically located in the system topology, and how the system software recognizes and communicates with components. To better understand the component hierarchy of a system, refer to Chapter 1 for an introduction to the SysMan Station. This is a graphical utility that displays topological views of the system component hierarchy and allows you to manipulate such views.

Fortunately, the vast majority of hardware management is automated. When you add a device such as a SCSI disk to a system and reboot, the system finds and recognizes the device, building in any device drivers that it needs. The system automatically creates the software components for that disk as device special files. It only remains for the administrator to partition the disk as needed and create a file system on the partitions (described in Chapter 6) before it can be used to store data. However, you will periodically need to perform some tasks manually, such as when a disk crashes and you need to bring a duplicate device online at the same logical location. You may also need to manually add devices to a running system or redirect the I/O for one disk to another disk. This chapter focuses on these manual tasks.

Many other hardware management tasks are part of regular system operations and maintenance, such as repartitioning a disk or adding an adapter to a bus. Often, such tasks are fully described in the hardware documentation that accompanies the device itself, but you will often need to perform tasks such as checking the system for the optimum (or preferred) physical and logical locations for the new device.

Another important aspect of hardware management is preventive maintenance and monitoring. You should be aware of the following operating system features that can facilitate a healthy system environment:

The organization of this chapter reflects the hardware and software components that you manage as follows:

Another way to think of this is that with a generic utility you can perform a task on many devices, while with a targeted utility you can only perform a task on a single device. Note that unless stated, most operations can be performed on a single system or a cluster. You should refer to the TruCluster documentation for additional information on managing cluster hardware.

5.1.1    Logical Storage Manager

The Logical Storage Manager (LSM) consists of physical disk devices, logical entities, and the mappings that connect both. LSM builds virtual disks, called volumes, on top of UNIX system disks. A volume is a special device that contains data managed by a UNIX file system, a database, or other application. LSM transparently places a volume between a physical disk and an application, which then operates on the volume rather than on the physical disk. A file system, for instance, is created on the LSM volume rather than a physical disk.

The LSM software maps the logical configuration of the system to the physical disk configuration. This is done transparently to the file systems, databases, and applications above it because LSM supports the standard block device and character device interfaces to store and retrieve data on LSM volumes. Thus, you do not have to change applications to access data on LSM volumes.

Refer to the manual Logical Storage Manager for more complete information on LSM concepts and commands.

5.2    Reference Information

The following sections contain reference information related to documentation, system files and other utilities. Some utilities described here are obsolete and will be removed in a future release. Consult the Release Notes for a list of utilities that are scheduled for retirement. If you are using one of these utilities, you should migrate to its replacement as soon as possible. Check your site-specific shell scripts for any calls that may invoke an obsolete utility.

5.2.1    Related Documentation

The following documentation contains information on hardware management:

Note that most command line and graphical utilities also provide extensive online help.

5.2.2    Identifying Hardware Management System Files

The following system files contain static or dynamic information that is used to configure the device into the kernel. You should not edit these files manually even if they are ASCII text files. Some files may be Context Dependent Symbolic Links, as described in Chapter 6. If the links are accidentally broken, the files may not be usable in a clustered environment until the links are re-created.

5.2.3    WWIDs and Shared Devices

SCSI device naming is based on the logical identifier (ID) of a device. This means that the device special filename has no correlation to the physical location of a SCSI device. UNIX uses information from the device to create an identifier called a world-wide identifier, which is usually written as WWID.

Ideally, the WWID for a device is unique, enabling the identification of every SCSI device attached to the system. However, some legacy devices (and even some new devices available from third-party vendors) do not provide the information required to create a unique WWID for a specific device. For such devices, the operating system will attempt to generate a WWID, and in the extreme case will use the device nexus (the SCSI bus/target/lun) to create a WWID for the device.

Consequently, devices that do not have a unique WWID should not be used on a shared bus. If a device that does not have a unique WWID is put on a shared bus, a different device special file will be created for each different path to the device. This can lead to data corruption if two different device special files are used to access the device at the same time. To determine if a device has a cluster-unique WWID, use the following command:


#  hwmgr -show components

If a device has the c flag set in the FLAGS field, then it has a cluster-unique WWID and can be placed on a shared bus. Such devices are cluster-shareable because they can be put on a shared bus within a cluster.

Note

HSZ devices are an exception to this rule. Although an HSZ device might be marked as cluster-shareable, some firmware revisions on the HSZ preclude multiple initiators from probing the device at the same time. Refer to the owner's manual for the HSZ device and check the Release Notes for any current restrictions.

The following example displays all the hardware components of category disk that have cluster-unique WWIDs:

# hwmgr -show comp -cat disk -cs
HWID: HOSTNAME FLAGS SERVICE COMPONENT NAME
-----------------------------------------------
35:   pmoba    rcd-- iomap   SCSI-WWID:0410004c:"DEC  RZ28     ..."
36:   pmoba    -cd-- iomap   SCSI-WWID:04100024:"DEC  RZ25F    ..."
42:   pmoba    rcd-- iomap   SCSI-WWID:0410004c:"DEC  RZ26L    ..."
43:   pmoba    rcds- iomap   SCSI-WWID:0410003a:"DEC  RZ26L    ..."
48:   pmoba    rcd-- iomap   SCSI-WWID:0c000008:0000-00ff-fe00-0000
49:   pmoba    rcd-- iomap   SCSI-WWID:04100020:"DEC  RZ29B    ..."
50:   pmoba    rcd-- iomap   SCSI-WWID:04100026:"DEC  RZ26N    ..."

In some rare cases you may have a device that does not supply a unique WWID but that must nevertheless be available on a shared bus. Using such devices on a shared bus is not recommended, but a manual command allows you to set up one of these devices for use on a shared bus. See Section 5.4.4.10 for a description of how to use the hwmgr -edit scsi command option.

5.2.4    Related Utilities

The following utilities are also available for use in managing devices:

5.3    Using the SysMan Hardware Utilities

The SysMan Menu Hardware branch provides utilities for hardware management. You can also use the SysMan Station to obtain information about hardware devices and to launch hardware management utilities.

The SysMan utilities provide a subset of the hardware management features available from the command line through the hwmgr command. A more detailed discussion of the hwmgr command and its options can be found in Section 5.4. See also the hwmgr(8) reference page for a complete listing of the command syntax and options. Selecting the help option in one of the SysMan Menu hardware tasks invokes the appropriate reference pages.

When you invoke the SysMan Menu as described in Chapter 1, hardware management options are available under the Hardware branch of the menu. Expanding this branch displays the following tasks:

These tasks launch SysMan Menu utilities that are described in the following sections. The first three utilities run instances of the /sbin/hwmgr command to obtain and display system data. Note that the utilities provide a method of finding the data that you use when specifying hardware management operations on system components, such as finding out which disks are on which SCSI buses.
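
For instance, the following command (used in Section 5.4.4.9) lists each SCSI device together with its bus, target, and LUN address:

# /sbin/hwmgr -show scsi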

The following option buttons (or choices, in a terminal) are available in all the utilities:

5.3.1    Viewing the Hardware Hierarchy

The View hardware hierarchy task invokes the command /sbin/hwmgr -view hierarchy, directing the output to the SysMan Menu window (or screen, if a terminal). The following example shows output from a single-CPU system that is not part of a cluster:

                 View hardware hierarchy
 
 HWID:  hardware component hierarchy
 ---------------------------------------------------
    1:  platform AlphaServer 800 5/500
    2:    cpu CPU0
    4:    bus pci0
    5:      connection pci0slot5
   13:        scsi_adapter isp0
   14:          scsi_bus scsi0
   30:            disk bus-0-targ-0-lun-0 dsk0
   31:            disk bus-0-targ-4-lun-0 cdrom0
    7:      connection pci0slot6
   15:        graphics_controller trio0
    9:      connection pci0slot7
   16:        bus eisa0
   17:          connection eisa0slot9
   18:            serial_port tty00
   19:          connection eisa0slot10
   20:            serial_port tty01
   21:          connection eisa0slot11 
   22:            parallel_port lp0
   23:          connection eisa0slot12
   24:            keyboard PCXAL
   25:            pointer PCXAS
   26:          connection eisa0slot13
   27:            fdi_controller fdi0
   28:              disk fdi0-unit-0 floppy0
   11:      connection pci0slot11
   29:        network tu0

Use this task to display the hardware hierarchy for the entire system or cluster. The hierarchy shows every bus, controller, and device on the system from the CPUs down to the individual peripheral devices such as disks and tapes. On a system or cluster that has many devices, the output can be lengthy and you may need to scroll the display to see devices at the beginning of the output.

The output is useful because it provides you with information that is used in many hwmgr command options to perform hardware management operations such as viewing more device detail and adding or deleting devices. The following items shown in the hierarchy can be used as command input:

Note that because the same device might be shared (for example, on a shared bus) it may appear in the hierarchy more than once and will have a unique identifier each time it appears. An example of this is given in Section 5.4.4.7.

You can use the information from the -view hierarchy command output in other hwmgr commands when you want to focus an operation on a specific hardware component, as shown in the following command, which gets the value of a device attribute named device_starvation_time for the device with the HWID (id) of 30. Device 30 is the disk device at bus 0, target 0 and lun 0 in the example hierarchy:

# /sbin/hwmgr -get attr -id 30 -a device_starvation_time
30:
  device_starvation_time = 25 (settable)

The output shows that the value of the device_starvation_time attribute is 25. The label (settable) indicates that this is a configurable attribute that you can set using the following command option:

# /sbin/hwmgr -set attr
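
For example, the following command sets the current value of the attribute shown above to an illustrative value of 30 seconds (setting attributes is described in Section 5.4.4.5):

# /sbin/hwmgr -set attr -id 30 -a device_starvation_time=30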

5.3.2    Viewing the Cluster

Selecting the View cluster task invokes the command /sbin/hwmgr -view cluster, directing the output to the SysMan Menu window (or screen, if a terminal) as follows:

                                  View cluster
 
Starting /sbin/hwmgr -view cluster ...
 
/sbin/hwmgr -view cluster run at Fri May 21 13:42:37 EDT 1999
 
 
          Member ID          State           Member HostName
          ---------          -----           ---------------
               1              UP                   rene (localhost)
              31              UP                   witt
              10              UP                   rogr

If you attempt to run this command on a system that is not a member of a cluster, the following message is displayed instead of the system listing:

hwmgr: This system is not a member of a cluster.

The Member ID and the HostName can be specified in some hwmgr commands when you want to focus an operation on a specific member of a cluster, as shown in the following example:

# /sbin/hwmgr -scan scsi -member witt

5.3.3    Viewing Device Information

Selecting the View device information task invokes the command /sbin/hwmgr -view devices, directing the output to the SysMan Menu window (or screen, if a terminal).

Use this option to display the device information for the entire system or cluster. The output shows every device and pseudodevice (such as /dev/kevm) on the system. The following example shows the output from a small single-CPU system that is not part of a cluster:

                            View device information
 
Starting /sbin/hwmgr -view devices ...
 
/sbin/hwmgr -view devices run at Fri May 21 14:20:08 EDT 1999
 
HWID:  Device Special File  Mfg Model           Location
                      Name
------------------------------------------------------------------
 3:            /dev/kevm
28:   /dev/disk/floppy0c        3.5in floppy    fdi0-unit-0
30:      /dev/disk/dsk0c   DEC  RZ1DF-CB(C)DEC  bus-0-targ-0-lun-0
31:    /dev/disk/cdrom0c   DEC  RRD47  (C)DEC   bus-0-targ-4-lun-0

For the purpose of this command, a "device" is considered to be any entity in the hierarchy that has the attribute dev_base_name and as such has an associated device special file (DSF). The output from this utility provides the following information, which can be used with the hwmgr command to perform hardware management operations on the device:

You can specify this information to certain hwmgr commands to perform hardware management operations on a particular device. The following disk location example specifies a device special file for a disk, causing the light-emitting diode (LED) on that disk to flash for 30 seconds. This tells you exactly which device special file is associated with that disk.

# /sbin/hwmgr -flash light -dsf /dev/disk/dsk0c

5.3.4    Viewing CPU Information

Selecting the View central processing unit (CPU) information task invokes the command /usr/sbin/psrinfo -v, directing the output to the SysMan Menu window (or screen, if a terminal). Use this option to display the CPU status information, as shown in the following sample output for a single-processor system.

The output from this utility describes the processor and tells you how long it has been running, as follows:

                               /usr/sbin/psrinfo
Starting /usr/sbin/psrinfo -v ...
 
/usr/sbin/psrinfo -v run at Fri May 21 14:22:05 EDT 1999
 
Status of processor 0 as of: 05/21/99 14:22:05 
  Processor has been on-line since 05/15/1999 14:42:28
   The alpha EV5.6 (21164A) processor operates at 500 MHz,
    and has an alpha internal floating point processor.

5.3.5    Using the SysMan Station

The SysMan Station is a graphical utility that runs under various windowing environments or from a web browser. Refer to Chapter 1 and the online help for information on launching and using the SysMan Station.

Features of the SysMan Station that assist you in hardware management are as follows:

Monitoring systems and devices

The SysMan Station provides a live view of system and component status. You can customize views to focus on parts of a system or cluster that are of most interest to you. You will be notified when a hardware problem occurs on the system. System views are hierarchical, showing the complete system topology from CPUs down to discrete devices such as tapes. You can observe the layout of buses, controllers, and adapters and see their logical addresses. You can see what devices are attached to each bus or controller, and their slot numbers. Such information is useful for running hwmgr commands from the command prompt.

Viewing device properties (or attributes)

You can select a device and view detailed attributes of that device. For example, if you select a SCSI disk device and press the right mouse button, a menu of options is displayed. You can choose to view the device properties for the selected disk. If you opt to do this, an extensive table of device properties will be displayed. This action is the same as using the hwmgr command, as shown in the following (truncated) sample output:


# hwmgr -get attr -id 30
30:
  name = SCSI-WWID:0c000008:0060-9487-2a12-4ed2
  category = disk
  sub_category = generic
  architecture = SCSI
  phys_location = bus-0-targ-0-lun-0
  dev_base_name = dsk0
  access = 7
  capacity = 17773524
  block_size = 512
  open_part_mask = 59
  disk_part_minor_mask = 4294967232
  disk_arch_minor_mask = 4290774015
<display truncated>

Launching hardware management utilities

When you select a device, you can also choose to launch a utility and perform configuration or daily administrative operations on the selected device. For example, if you select a network adapter, you can configure its settings or perform related tasks such as configuring the domain name server (DNS). You can launch the Event Viewer to see whether any system events (such as errors) pertaining to this device have been posted recently.

Note that you can also run the SysMan Station from within Insight Manager and use it from a PC, enabling you to remotely manage system hardware. Refer to Chapter 1 for more information on remote management options.

5.4    Using hwmgr to Manage Hardware

The principal generic utility used for managing hardware is the hwmgr command line interface (CLI). Other utilities, such as the SysMan utilities, provide only a limited subset of the features provided by hwmgr. For example, you can use hwmgr to set an attribute for all devices of a particular type (such as SCSI disks) on all SCSI adapters in all members of a cluster.

Most hardware management is performed automatically by the system and you need only intervene under certain circumstances, such as replacing a failed device so that the replacement device takes on the identity of the failed device. The following sections provide information on:

5.4.1    Understanding the Hardware Management Model

Within the operating system kernel, hardware data is organized as a hardware set managed by the kernel set manager (KSM). Application requests are passed by library routines to KSM kernel code, or remote code. The latter deals with requests to and from other systems. The hardware component module (HWC) resides in the kernel, and contains all the registration routines to create and maintain hardware components in the hardware set. It also contains the device nodes for device special file management, which is performed using dsfmgr.

The hardware set consists of data structures that describe all of the hardware components that are part of the system. A hardware component (or device) becomes part of the hardware set when registered by its driver. Devices have various attributes that describe their function and content. Each attribute is assigned a value. You can read or manipulate these attribute values using hwmgr.

Hardware management using the hwmgr utility is organized into three parts, referred to as subsystems by the hwmgr utility. The subsystems are identified as component, scsi, and name. The subsystems are related to the system hardware databases as follows:

The specific features of hwmgr are as follows:

5.4.2    Understanding hwmgr Command Options

The hwmgr utility works with the KSM hardware set and the kernel hardware management module, providing you with the ability to manage hardware components. A hardware component can be a storage peripheral, such as a disk or tape, or a system component such as a CPU or a bus. Use the hwmgr utility to manage hardware components on either a single system or on a cluster.

The hwmgr utility provides two types of commands, internal and generic. Internal commands do not specify a subsystem identifier on the command line. Generic commands are characterized by a subsystem identifier after the command name.

5.4.2.1    Using Generic Hardware Manager Commands

Generic hwmgr commands have the following synopsis:

/sbin/hwmgr [component | name | scsi] [parameter]

Refer to the hwmgr(8) reference page and use the -help command option to obtain information on the command syntax, as shown in the following example:

# hwmgr -help component

Note that some hwmgr commands are duplicated in more than one subsystem and not all commands are usable across all subsystems. You should use the subsystem most closely associated with the type of operation you want to perform. The following are examples of commands. Refer to the hwmgr(8) reference page for a definitive list of commands and for additional examples.
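
For example, the following generic commands, each of which names a subsystem, are used in tasks later in this chapter (the -uwwid value shown is illustrative):

# hwmgr -show scsi
# hwmgr -scan scsi
# hwmgr -edit scsi -did 2 -uwwid "new-name"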

5.4.2.2    Using Internal Hardware Manager Commands

Internal hwmgr commands have the following typical synopsis:

/sbin/hwmgr -get attribute [saved | default | current] [-a attribute] [-a attribute=value] [-a attribute!=value] [-id hardware-component-id] [-category hardware-category] [-member cluster-member-name] [-cluster]

The -get attribute command option is only one of many command options available. Obtain a complete listing of command options using the following command:

# /sbin/hwmgr -help

Refer also to the hwmgr(8) reference page for a complete list of supported command combinations, and optional flags. Examples of commands are shown in the following list:
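
For example, the following internal commands, none of which names a subsystem, are used in tasks later in this chapter:

# hwmgr -view hierarchy
# hwmgr -get attr -id 30 -a device_starvation_time
# hwmgr -flash light -dsf /dev/disk/dsk0c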

Section 5.4.4 contains examples of how you use hwmgr to perform administrative tasks.

5.4.3    Configuring the hwmgr Environment

The hwmgr utility has some environment settings that you can use to control the amount of information displayed. The settings of the environment can be viewed using the following command, which displays the system default settings:

# hwmgr -view env
 
  HWMGR_DATA_FILE = "/etc/hwmgr/hwmgr.dat"
  HWMGR_DEBUG = FALSE
  HWMGR_HEXINTS = FALSE
  HWMGR_NOWRAP = FALSE
  HWMGR_VERBOSE = FALSE

As with other environment variables, you can set the values in a login script or at the command line, as shown in the following example:


# HWMGR_VERBOSE=TRUE
# export HWMGR_VERBOSE

You usually only need to define the HWMGR_HEXINTS, HWMGR_NOWRAP, and HWMGR_VERBOSE values, as follows:
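
The names indicate the likely effect of each variable: displaying integer values in hexadecimal, suppressing the wrapping of long output lines, and producing more detailed output, respectively. For example, to request unwrapped output you might set the following, and then confirm the setting with the -view env option:

# HWMGR_NOWRAP=TRUE
# export HWMGR_NOWRAP
# hwmgr -view env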

5.4.4    Using hwmgr to Manage Hardware

The following sections contain examples of tasks that you may need to perform using hwmgr. Some of these examples may not be useful for managing a small server with a few peripheral devices. However, when managing a large installation with many networked systems or clusters with hundreds of devices they become very useful. Using hwmgr enables you to connect to an unfamiliar system, obtain information about its device hierarchy, and then perform administrative tasks without any previous knowledge about how the system is configured and without consulting system logs or files to find devices.

5.4.4.1    Locating SCSI Hardware

On systems with many SCSI peripherals, it can often be difficult to identify a particular device and associate that device with its logical address or device special file. The -flash light option, which currently only works for some SCSI devices, enables you to identify a device. This option has the following syntax:

/sbin/hwmgr -flash light [-dsf device-special-file] [-bus N] [-target N] [-lun N] [-seconds number] [-nopause]

You might use this command when you are trying to physically locate a SCSI disk. For example, a service engineer has arrived and asks where the system root disk is located. You know from your /etc/fstab file that you are using /dev/disk/dsk4a as your root device, but you do not know where that disk is physically located. The following command will flash the LED (light-emitting diode) on the root device for a minute:

# /sbin/hwmgr -flash light -dsf dsk4 -seconds 60

You can then check the disk bays for the device that is flashing its light.

The LED on the device may be the same LED that is used to indicate normal disk I/O activity (reads and writes). If there is much activity on all the disks, it may not be easy to see which disk is flashing. In this case, you can specify the -nopause option. Using -nopause causes the target disk to turn on the LED constantly for the specified time (the default is 30 seconds). This option is also very useful on SCSI RAID devices where you have more than one disk contained in a RAIDSET and you want all of the disks to turn on their LEDs.
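
For example, based on the syntax shown above, the following command keeps the LED lit constantly for a full minute:

# /sbin/hwmgr -flash light -dsf dsk4 -seconds 60 -nopause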

See also the -locate component option.

5.4.4.2    Viewing the System Hierarchy

The -view command can be used to view the hierarchy of hardware within a system. Use this command to find what adapters are controlling devices, and discover where adapters are installed on buses. The command syntax is as follows:

/sbin/hwmgr -view hierarchy [-id hardware_component_id] [-instance instance_number]

The following example shows typical output on a small system that is not part of a cluster:

# hwmgr -view hier
 
HWID: Hardware component hierarchy
  ----------------------------------
   147:  platform DEC 3000 - M400
     2:    cpu CPU0
   148:    bus tc0
   149:      connection tc0slot7
     6:        serial_port tty00
     7:        serial_port tty01
   150:        keyboard LK401
   151:        pointer VSXXXAA
   154:        network ln0
   153:      connection tc0slot6
   152:        scsi_adapter tcds0
   155:      connection tc0slot0
   156:        graphics_controller fb0

Note that some devices may appear as multiple entries in the hierarchy. For example, if a disk is on a SCSI bus that is shared between two adapters, the hierarchy will show two entries for the same device.

You can obtain similar views of the system hardware hierarchy using the SysMan Station.

5.4.4.3    Viewing System Categories

To perform hardware management operations on all devices of the same category, or to select a particular device in a category, you may need to know what categories of devices are available. The hardware manager -get category command fetches all the possible values for hardware categories, and has the following syntax:

/sbin/hwmgr -get category

This command is useful when used in conjunction with the -get/-set attributes options, which you can use to display and configure the attributes (or properties) of a particular device. Once you know the hardware categories you can limit your attribute queries to a specific type of hardware.

The command produces output similar to the following:

  Hardware Categories
  -------------------
  category = undefined
  category = platform
  category = cpu
  category = pseudo
  category = bus
  category = connection
  category = serial_port
  category = keyboard
  category = pointer
  category = scsi_adapter
  category = scsi_bus
  category = network
  category = graphics_controller
  category = disk
  category = tape

Your attribute query can then be focused as follows:

# hwmgr -get attr -category platform
  1:
   name = DEC 3000 - M400
   category = platform

This output informs you that the system platform has a hardware ID of 1, and that the platform name is DEC 3000 - M400. See also the -get attribute and -set attribute options.

5.4.4.4    Obtaining Component Attributes

Any device driver that controls a hardware device will register and maintain the KSM (kernel set manager) attributes for that component. Attributes are characteristics of the device that may simply be information, such as the model number of the device, or they may control some aspect of the behavior of the device, such as the speed at which it operates.

The -get attribute command fetches and displays KSM (kernel set manager) attributes for a component. The hardware manager utility is specific to managing hardware and fetches KSM attributes only from the hardware set. All hardware components are identified by KSM using a unique hardware identifier, otherwise known as the hardware ID or HWID. The syntax of the command was given as an example in Section 5.4.2.2 to show typical hwmgr internal command syntax.

The following command will fetch all attributes for all hardware components on the local system and direct the output to a file which you can then search for information:

# hwmgr -get attr > sysattr.txt

However, if you know which device category you want to query, as was demonstrated in Section 5.4.4.3, you can focus your query on that particular category.

Querying a hardware component category for its attributes can provide useful information. For example, you may not be sure if the network is working for some reason. You may not even know what type of network adapters are installed in a system or how they are configured. Use the -get attribute option to determine the status of network adapters as shown in the following example:

# hwmgr -get attr -category network
  203:
   name = ln0
   category = network
   sub_category = Ethernet
   model = DE422
   hardware_rev =
   firmware_rev =
   MAC_address = 08-00-2B-3E-08-09
   MTU_size = 1500
   media_speed = 10
   media_selection = Selected by Jumpers/Switches
   media_type =
   loopback_mode = 0
   promiscuous_mode = 0
   full_duplex = 0
   multicast_address_list = CF-00-00-00-00-00 \
    01-00-5E-00-00-01
   interface_number = 1

This output provides you with the following information:

In some cases, you can change the value of a device attribute to modify device information or change its behavior on the system. Setting attributes is described in Section 5.4.4.5. To find which attributes are settable, you can use the -get option to fetch all attributes and use the grep command to search for the (settable) keyword as follows:


# hwmgr -get attr | grep settable

   device_starvation_time = 25 (settable)
   device_starvation_time = 0 (settable)
   device_starvation_time = 25 (settable)
   device_starvation_time = 25 (settable)
   device_starvation_time = 25 (settable)
   device_starvation_time = 25 (settable)

The output shows that there is one settable attribute on the system, device_starvation_time. Having found this, you can now obtain a list of devices that support this attribute as follows:

# hwmgr -get attr -a device_starvation_time
  23:
   device_starvation_time = 25 (settable)
  24:
   device_starvation_time = 0 (settable)
  25:
   device_starvation_time = 25 (settable)
  31:
   device_starvation_time = 25 (settable)
  34:
   device_starvation_time = 25 (settable)
  35:
   device_starvation_time = 25 (settable)

The output from this command displays the HWIDs of the devices that support the device_starvation_time attribute. By matching these HWIDs against the hierarchy output, you can further determine that this attribute is supported by SCSI disks.

See also the -set attribute and -get category options.

5.4.4.5    Setting Component Attributes

The -set attribute command option allows you to set (or configure) the value of settable KSM attributes (within the hardware set). Not all device attributes can be set. When you use the -get attribute command option, the output will flag any attributes that can be configured by labeling them as (settable) next to the attribute value. Finding such attributes is described in Section 5.4.4.4.

The command syntax for setting attribute values is as follows:

/sbin/hwmgr -set attribute [saved | current] {-a attribute} {-a attribute=...}... [-id hwid] [-member cluster-member-name] [-cluster]

As demonstrated in Section 5.4.4.4, the value of device_starvation_time is an example of a settable attribute supported by SCSI disks. This attribute controls the amount of time that must elapse before the disk driver determines that a device is unreachable due to SCSI bus starvation (no data transmitted). If the device_starvation_time expires before the driver is able to determine that the device is still there, the driver will post an error event to the binary error log.

Using the following commands, you can change the value of the device_starvation_time attribute for the device with the HWID of 24, and then verify the new value:

# hwmgr -set attr -id 24 -a device_starvation_time=60
# hwmgr -get attr -id 24 -a device_starvation_time
  24:
   device_starvation_time = 60 (settable)

This action does not change the saved value for this attribute. All attributes have three possible values: a current value, a saved value, and a default value. The default value is a constant and can never be set. If you never set a value of an attribute, the default value applies. The saved value can be set and persists across boots. You can think of it as a permanent override of the default.

The current value can be set but does not persist across reboots. You can think of it as a temporary value for the attribute. When a system is rebooted, the value of the attribute will revert to the saved value (if there is a saved value). If there is no saved value the attribute value will revert to the default value. Setting an attribute value always changes the current value of the attribute. The following examples show how you get and set the saved value of an attribute:

# hwmgr -get attr saved -id 24 -a device_starvation_time
  24:
   saved device_starvation_time = 0 (settable)
 
# hwmgr -set attr saved -id 24 -a device_starvation_time=60 
    saved device_starvation_time = 60 (settable)
# hwmgr -get attr saved -id 24 -a device_starvation_time
  24:
   saved device_starvation_time = 60 (settable)

See also the -get attribute and -get category command options.

5.4.4.6    Viewing the Cluster

If you are working on a cluster, you often need to focus hardware management commands at a particular host on the cluster. The -view cluster command option enables you to obtain details of the hosts in a cluster. The following sample output shows a typical cluster:

  Member ID     State   Member HostName
  ---------     -----   ---------------
    1           UP      ernie.zok.paq.com (localhost)
    2           UP      bert.zok.paq.com
    3           DOWN    bigbird.zok.paq.com

This option can also be used to verify that the hwmgr utility is aware of all cluster members and their current status. The command has the following syntax:

/sbin/hwmgr -view cluster

The preceding example indicates a three member cluster with one member (bigbird) currently down. The (localhost) marker tells us that hwmgr is currently running on cluster member ernie. Any hwmgr commands that you enter using the -cluster option will be sent to members bert and ernie, but not to bigbird as that system is unavailable. Additionally, any hwmgr commands that you issue using the -member bigbird option will fail because the cluster member state for that host is DOWN.

Note that this command only works if the system is a member of a cluster. If you attempt to run it on a single system an error message is displayed. See also the clu_get_info command, and refer to the TruCluster documentation for more information on clustered systems.

5.4.4.7    Viewing Devices

You can use the hwmgr -view devices option to display all devices that have a device special file name, such as /dev/disk/dsk34. The hardware manager considers any hardware component that has the KSM attribute dev_base_name to be an accessible device. (See Section 5.4.4.4 for information on obtaining the attributes of a device.) This command has the following syntax:

/sbin/hwmgr -view devices [-category hardware_category] [-member cluster-member-name] [-cluster]

This command option enables you to determine what devices are currently registered with hardware management on a system, and provides information that enables you to access these devices through their device special files. For example, if you load a CD-ROM into a reader, this output could be used to determine that the CD-ROM reader should be mounted as /dev/disk/cdrom0. The -view devices option is also useful to find the hardware identifiers (HWID) for any registered devices. When you know the HWID for a device, you can use other hwmgr command options to query KSM attributes on the device, or perform other operations on the device.
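
For example, after identifying the CD-ROM reader from this output, you might mount a disc as follows (a sketch, assuming a CDFS-format disc and an existing /mnt directory):

# mount -t cdfs -o noversion /dev/disk/cdrom0c /mnt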

Typical output from this command is shown in the following example:

# hwmgr -view dev
 
 

  HWID:             DSF Name Mfg      Model       Location
 
----------------------------------------------------------------------
     3:            /dev/kevm
    22:      /dev/disk/dsk0c DEC      RZ26        bus-0-targ-3-lun-0
    23:    /dev/disk/cdrom0c DEC      RRD42       bus-0-targ-4-lun-0
    24:      /dev/disk/dsk1c DEC      RZ26L       bus-1-targ-2-lun-0
    25:      /dev/disk/dsk2c DEC      RZ26L       bus-1-targ-4-lun-0
    29:     /dev/ntape/tape0 DEC      TLZ06       bus-1-targ-6-lun-0
    35:     /dev/disk/dsk8c  COMPAQ   RZ1CF-CF    bus-2-targ-12-lun-0

The listing of devices shows all hardware components that have the dev_base_name attribute on the local system. The hardware manager attempts to resolve the dev_base_name to the full path location to the device special file, such as /dev/ntape/tape0. It always uses the path to the device special file with partition c because that partition is usually used to represent the entire capacity of the device, except in the case of tapes. See Section 5.5 for information on device special file names and functions.

If you are working on a cluster, you can view all devices registered with hardware management across the entire cluster with the -cluster option, as follows:

# hwmgr -view devices -cluster

  HWID:             DSF Name    Model       Location        Hostname
 
------------------------------------------------------------------
   20:   /dev/disk/floppy0c    3.5in          fdi0-unit-0   tril7e
   34:    /dev/disk/cdrom0c    RRD46   bus-0-targ-5-lun-0   tril7e
   35:      /dev/disk/dsk0c    HSG80   bus-4-targ-1-lun-1   tril7d
   35:      /dev/disk/dsk0c    HSG80   bus-6-targ-1-lun-1   tril7e
   36:      /dev/disk/dsk1c    RZ26N   bus-1-targ-0-lun-0   tril7e
   37:      /dev/disk/dsk2c    RZ26N   bus-1-targ-1-lun-0   tril7e
   38:      /dev/disk/dsk3c    RZ26N   bus-1-targ-2-lun-0   tril7e
   39:      /dev/disk/dsk4c    RZ26N   bus-1-targ-3-lun-0   tril7e
   40:      /dev/disk/dsk5c    RZ26N   bus-1-targ-4-lun-0   tril7e
   41:      /dev/disk/dsk6c    RZ26N   bus-1-targ-5-lun-0   tril7e
   42:      /dev/disk/dsk7c    RZ26N   bus-1-targ-6-lun-0   tril7e
   43:      /dev/disk/dsk8c    HSZ40   bus-3-targ-2-lun-0   tril7d
   43:      /dev/disk/dsk8c    HSZ40   bus-3-targ-2-lun-0   tril7e
   44:      /dev/disk/dsk9c    HSZ40   bus-3-targ-2-lun-1   tril7d
   44:      /dev/disk/dsk9c    HSZ40   bus-3-targ-2-lun-1   tril7e
   45:     /dev/disk/dsk10c    HSZ40   bus-3-targ-2-lun-2   tril7d
   45:     /dev/disk/dsk10c    HSZ40   bus-3-targ-2-lun-2   tril7e

Note that some devices, such as the disk with the HWID of 45, appear more than once in this display. These are devices that are on a shared bus between two cluster members. The hardware manager displays the device entry as seen from each cluster member.

See also the following hwmgr command options: -show scsi, -show components, and -get attributes.

5.4.4.8    Viewing Transactions

Hardware management operations are transactions that need to be synchronized across a cluster. The -view transaction command option displays the state of any hardware management transactions that have occurred since the system was booted. This option can be used to check for failed hardware management transactions. The command option has the following syntax:

/sbin/hwmgr -view transactions

If you do not specify the -cluster or -member option, the command displays status on transactions that have been processed or initiated by the local host (the system on which the command is entered). Note that the -view transaction command is primarily for debugging problems with hardware management in a cluster, and you will not need to use this command very often, if ever. The command has the following typical output:

# hwmgr -view trans
 
   hardware management transaction status
  -----------------------------------------------------
  there is no active transaction on this system
   the last transaction initiated from this system was:
    transaction = modify cluster database
    proposal    = 3834
    sequence    = 0
    status      = 0
   the last transaction processed by this system was:
    transaction = modify cluster database
    proposal    = 3834
    sequence    = 0
    status      = 0
 
 proposal                      last status  success  fail
 ----------------------------  -----------  -------  ----
              Modify CDB/ 3838  0            3        0
                Read CDB/ 3834  0            3        0
            No operation/ 3835  0            1        0
             Change name/ 3836  0            0        0
             Change name/ 3837  0            0        0
               Locate HW/ 3832  0            0        0
                 Scan HW/ 3801  0            0        0
   Unconfig HW - confirm/ 3933  0            0        0
    Unconfig HW - commit/ 3934  0            0        0
     Delete HW - confirm/ 3925  0            0        0
      Delete HW - commit/ 3926  0            0        0
   Redirect HW - confirm/ 3928  0            0        0
   Redirect HW - commit1/ 3929  0            0        0
   Redirect HW - commit2/ 3930  0            0        0
          Refresh - lock/ 3937  0            0        0

From this output you can tell that the last transaction that occurred was a modification of the cluster database.

5.4.4.9    Deleting a SCSI Device

Under some circumstances, you may want to remove a SCSI device from a system, such as when it is logging device errors and must be replaced. Use the -delete scsi command option to remove a SCSI component from all hardware management databases cluster-wide. This option unregisters the component from the kernel, removes all persistent database entries for the device, and removes all device special files. When you delete a SCSI component it is no longer accessible and its device special files will be removed from the appropriate /dev subdirectory. Note that you cannot delete a SCSI component that is currently open, and all connections to the device (such as mounts) must be terminated.

Usually, you might delete a SCSI component if it was being removed from your system and you did not want any information about the device to remain on the system. You might also want to delete a SCSI component if there were software, rather than hardware, problems; for example, if the device was operating properly but could not be accessed through the device special file for some reason. In this case you could delete the component and use the -scan scsi command option to find and register it as if it were a newly installed device.

To replace the SCSI device (or bring the old device back) you can use the -scan scsi command option to find the device again. However, when you delete a component and then perform a -scan operation to bring the component back on line, it may not be assigned the device special file name that it previously held. To replace a device as an exact replica of the original, you need to perform the additional operations described in Section 5.4.4.11. In addition, there is also no guarantee that the subsequent -scan operation will find the device if it is not actively responding during the bus scan.

The -delete scsi command option has the following syntax:

/sbin/hwmgr -delete scsi [-did scsi-device-identifier]

Note that the SCSI device identifier -did is not equivalent to the hardware identifier (HWID).

The following examples show how you check the SCSI database and then delete a SCSI device:

# hwmgr -show scsi
 
 

      SCSI           DEVICE DEVICE  DRIVER NUM  DEVICE FIRST
HWID: DEVICEID HOST- TYPE   SUBTYPE OWNER  PATH FILE   VALID
               NAME                                     PATH
  ----------------- -----------------------------------------
23:   0       bert   disk   none    2      1   dsk0   [0/3/0]
24:   1       bert   cdrom  none    0      1   cdrom0 [0/4/0]
25:   2       bert   disk   none    0      1   dsk1   [1/2/0]
30:   4       bert   tape   none    0      1   tape2  [1/6/0]
31:   3       bert   disk   none    0      1   dsk4   [1/4/0]
34:   5       bert   disk   none    0      1   dsk7   [2/5/0]
35:   6       bert   disk   none    0      1   dsk8

In this example, component ID 23 is currently open by a driver. You can see this because the DRIVER OWNER field is not zero; any number other than zero in the DRIVER OWNER field means that a driver has opened the device for use. Therefore, you cannot delete SCSI component 23 because it is currently being used.

However, component ID 35 is not open by a driver, and it currently has no valid paths shown in the FIRST VALID PATH field. This means that the device is not currently accessible and can be safely deleted. The /dev/disk/dsk8* and /dev/rdisk/dsk8* device special files will also be deleted.

To delete the SCSI device, specify the SCSI DEVICEID value with the -delete option, and then review the SCSI database as follows:

# hwmgr -del scsi -did 6
   hwmgr: The delete operation was successful.
# hwmgr -show scsi
 
      SCSI            DEVICE  DEVICE  DRIVER NUM  DEVICE FIRST
HWID: DEVICE HOSTNAME TYPE    SUBTYPE OWNER  PATH FILE   VALID
      ID                                                  PATH
  -------------------------------------------------------------
23:   0      bert     disk    none    2      1   dsk0   [0/3/0]
24:   1      bert     cdrom   none    0      1   cdrom0 [0/4/0]
25:   2      bert     disk    none    0      1   dsk1   [1/2/0]
30:   4      bert     tape    none    0      1   tape2  [1/6/0]
31:   3      bert     disk    none    0      1   dsk4   [1/4/0]
34:   5      bert     disk    none    0      1   dsk7   [2/5/0]

The device /dev/disk/dsk8 has been successfully deleted.
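
If the device is later reinstalled and must be registered again, you can probe the SCSI subsystem for it as follows:

# hwmgr -scan scsi

As noted earlier, the rescanned device may not be assigned its previous device special file name; see Section 5.4.4.11 for how to make a replacement take on the identity of the original device.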

5.4.4.10    Creating a User-Defined SCSI Device Name

Most devices have an identification attribute that is unique to the device. This can be read as the serial_number or name attribute of a SCSI device. For example, the following hwmgr command will return both these attributes for the device HWID: 30, a SCSI disk:

# hwmgr -get attributes -id 30 -a serial_number -a name
30:
  serial_number = SCSI-WWID:0c000008:0060-9487-2a12-4ed2
  name = SCSI-WWID:0c000008:0060-9487-2a12-4ed2

This string is known as a world-wide identifier (WWID) because it is unique for every device on the system.

Some older devices do not provide a unique identifier, so the operating system will create such a number for the device using valid path bus/target/lun data that describes the physical location of the device. Because a device can be shared by more than one system (or more than one bus), each system that has access to the device will see a different path and will create its own unique WWID for that device. This creates the possibility of concurrent access to a device, and data on the device could be corrupted. To check for such devices, use the following command:


# hwmgr -show comp -cshared
 
 HWID:  HOSTNAME   FLAGS SERVICE COMPONENT NAME
-----------------------------------------------
   40:  joey       -cd-- iomap   SCSI-WWID:04100026:"DEC \
 RZ28M    (C) DEC00S846590H7CCX"
   41:  joey       -cd-- iomap   SCSI-WWID:04100026:"DEC \
 RZ28L-AS (C) DECJEE019480P2VSN"
   42:  joey       -cd-- iomap   SCSI-WWID:0410003a:"DEC \
 RZ28     (C) DECPCB=ZG34142470  ; HDA=000034579643"
   44:  joey       rcd-- iomap   SCSI-WWID:04100026:"DEC \
 RZ28M    (C) DEC00S735340H6VSR"
.
.
.
 
 

You can use hwmgr to create a user-defined unique name that will in turn enable you to create a WWID that is common to all systems that are sharing the device. This means that the device will have a common WWID and one set of device special file names.

The process for creating a user-defined name is as follows:

Caution

All systems with access to the device should be updated. Otherwise, the access controls which ensure data coherency may not be valid and data may be corrupted.

The -edit scsi command option has the following syntax:

/sbin/hwmgr -edit scsi [-did device-id] [-uwwid user-defined-name] [-member cluster-member-name]

The following examples show how you assign a user-defined name:

# hwmgr -show scsi
 
      SCSI           DEVICE DEVICE  DRIVER NUM  DEVICE FIRST
HWID: DEVICEID HOST  TYPE   SUBTYPE OWNER  PATH FILE   VALID
      ID       NAME                                    PATH
 ------------------------------------------------------------
  22: 0       ftwod  disk   none    0      1   dsk0   [0/3/0]
  23: 1       ftwod  cdrom  none    0      1   cdrom0 [0/4/0]
  24: 2       ftwod  disk   none    0      1   dsk1   [1/2/0]
  25: 3       ftwod  disk   none    2      1   dsk2   [2/4/0]

This command displays which SCSI devices are on the system. On this system the administrator knows that there is a shared bus and that hardware components 24 and 25 are actually the same device. The WWID for this device is constructed using the bus/target/lun address information. Because the bus/target/lun addresses are different, the device is seen as two separate devices. This can cause data corruption problems because two sets of device special files can be used to access the disk (/dev/disk/dsk1 and /dev/disk/dsk2).

The following command shows how you can rename the device, and demonstrates how it appears after being renamed:

# hwmgr -edit scsi -did 2 -uwwid "this is a test"
    hwmgr: Operation completed successfully.
 
# hwmgr -show scsi -did 2 -full

         SCSI                DEVICE    DEVICE  DRIVER NUM  DEVICE FIRST
  HWID:  DEVICEID HOSTNAME   TYPE      SUBTYPE OWNER  PATH FILE   VALID PATH
  -------------------------------------------------------------------------
    24:  2        ftwod      disk      none    0      1    dsk1   [1/2/0]
 
      WWID:0910003c:"DEC    (C) DECZG41400123ZG41800340:d01t00002l00000"
      WWID:ff10000e:"this is a test"
 
      BUS   TARGET  LUN   PATH STATE
      ------------------------------
      1     2       0     valid

The operation is repeated on the other device path, and the same name is given to the device at address 2/4/0. When this is done, hardware management will use the user-defined name to track the device and recognize it as an alternate path to the same device:

# hwmgr -edit scsi -did 3 -uwwid "this is a test"
    hwmgr: Operation completed successfully.
 
# hwmgr -show scsi -did 3 -full

         SCSI                DEVICE    DEVICE  DRIVER NUM  DEVICE FIRST
  HWID:  DEVICEID HOSTNAME   TYPE      SUBTYPE OWNER  PATH FILE   VALID PATH
  -------------------------------------------------------------------------
    25:  3        ftwod     disk      none    0      1    dsk1   [2/4/0]
 
      WWID:0910003c:"DEC    (C) DECZG41400123ZG41800340:d02t00004l00000"
      WWID:ff10000e:"this is a test"
 
      BUS   TARGET  LUN   PATH STATE
      ------------------------------
      2     4       0     valid

Both of these devices now use device special file name /dev/disk/dsk1 and there is no longer a danger of data corruption as a result of two sets of device special files accessing the same disk.

5.4.4.11    Replacing a Failed SCSI Device

When a SCSI device fails, you may want to replace it in such a way that the replacement disk takes on hardware characteristics of the failed device, such as ownership of the same device special files. The -redirect command option enables you to assign such characteristics. For example, if you have an HSZ (RAID) cabinet and a disk fails, you can hot-swap the failed disk and then use the -redirect command option to bring the new disk on line as a replacement for the failed disk.

Note

The replacement device must be of the same device type for the -redirect operation to work.

This command has the following syntax:

/sbin/hwmgr -redirect scsi [-src scsi-device-id] [-dest scsi-device-id]

The following examples show how you use the -redirect option:

# /sbin/hwmgr -show scsi
      SCSI          DEVICE DEVICE DRIVER NUM  DEVICE  FIRST
HWID: DEVICE- HOST- TYPE   SUB-   OWNER  PATH FILE    VALID
      ID      NAME         TYPE                       PATH
  ---------------------------------------------------------
 23:   0     fwod  disk   none   2      1    dsk0   [0/3/0]
 24:   1     fwod  cdrom  none   0      1    cdrom0 [0/4/0]
 25:   2     fwod  disk   none   0      1    dsk1   [1/2/0]
 30:   4     fwod  tape   none   0      1    tape2  [1/6/0]
 31:   3     fwod  disk   none   0      1    dsk4
 37:   5     fwod  disk   none   0      1    dsk10  [2/5/0]

This output shows a failed SCSI disk of HWID 31. The device has no valid paths. To replace this failed disk with a new disk that has device special file name /dev/disk/dsk4, and the same dev_t information, use the following procedure:

  1. Install the device as described in the hardware manual.

  2. Use the following command to find the new device:

    
    # /sbin/hwmgr -scan scsi
    

    This command probes the SCSI subsystem for new devices and registers those devices. You can then repeat the -show scsi command and obtain the SCSI device id of the replacement device.

  3. Use the following command to reassign the device characteristics from the failed disk to the replacement disk. This example assumes that the SCSI device id (did) assigned to the new disk is 36:

    # /sbin/hwmgr -redirect scsi -src 3 -dest 36
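
     To verify the operation, you can display the SCSI database again; the replacement disk should now own the device special file /dev/disk/dsk4 and show a valid path (a suggested check):

     # /sbin/hwmgr -show scsi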
    

5.4.4.12    Viewing the name Persistence Database

The name persistence database stores information about the hardware topology of the system. This data is maintained by the kernel and includes data for controllers and buses in addition to devices. Use the -show name command option to display persistence data, which you can then manipulate using other hwmgr commands. The command has the following syntax:

/sbin/hwmgr -show name [-member cluster-member-name]

The following example shows typical output from the -show name command option on a small system:

# hwmgr -show name -member ychain
 
 HWID:  NAME    HOSTNAME   PERSIST TYPE    PERSIST AT
-----------------------------------------------------
   13:  isp0    ychain     BUS             pci0 slot 5
    4:  pci0    ychain     BUS             nexus
   14:  scsi0   ychain     CONTROLLER      isp0 slot 0
   29:  tu0     ychain     CONTROLLER      pci0 slot 11

The following information is provided by the output:

5.4.4.13    Deleting and Removing a Name from the Persistence Database

One of the options for manipulating the name subsystem is to remove devices from the persistence database. The hwmgr utility offers two methods of removal, the -remove and -delete command options, which have the following syntax:

/sbin/hwmgr -remove name [-entry name]

/sbin/hwmgr -delete name [-entry name]

Where name is the device name shown in the output from the -show name command option described in Section 5.4.4.12.

The following example shows typical output from the -show name command option on a small system:

# hwmgr -show name
 HWID:  NAME    HOSTNAME  PERSIST TYPE    PERSIST AT
 
------------------------------------------------------
   33:  aha0    fegin     BUS             eisa0 slot 7
   31:  ln0     fegin     CONTROLLER      eisa0 slot 5
    8:  pci0    fegin     BUS             ibus0 slot 0
   34:  scsi1   fegin     CONTROLLER      aha0 slot 0
   17:  scsi0   fegin     CONTROLLER      psiop0 slot 0
   15:  tu0     fegin     CONTROLLER      pci0 slot 0

Note that two SCSI adapters are shown. If scsi0 is the target of a -remove operation, scsi1 does not become scsi0; the information about where the adapter is located persists at aha0 slot 0, and the name scsi1 is saved across boots.

To remove scsi0 and rename scsi1, use the following commands:

# hwmgr -remove name -ent scsi0
# hwmgr -edit name -ent scsi1 -parent_num 0
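
The -delete command option follows the same syntax. A minimal sketch, using an illustrative entry name:

# hwmgr -delete name -ent scsi0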
 
 

5.5    Device Naming and Device Special Files

Devices are made available to the rest of the system through device special files located in the /dev directory. A device special file enables an application (such as a database application) to access a device through its device driver, which is a kernel module that controls one or more hardware components of a particular type, such as network controllers, graphics controllers, and disk devices (including CD-ROM devices). See Section 5.4 for a discussion of system components.

Device special files are also used to access pseudodevice drivers that do not control a hardware component, for example, a pseudoterminal (pty) terminal driver, which simulates a terminal device. The pty terminal driver is a character driver typically used for remote logins; it is described in Section 5.6. (For detailed information on device drivers refer to the device driver documentation.)

Normally, device special file management is performed automatically by the system. For example, when you install a new version of the UNIX operating system, there is a point at which the system probes all buses and controllers and all the system devices are found. The system then builds databases that describe the devices and creates device special files which make them available to users. The most common way that you use a device special file is to specify it as the location of a UFS file system in the system /etc/fstab file, which is documented in Chapter 6.

You only need to perform manual operations on device special files when there are problems with the system or when you need to support a device that cannot be handled automatically. The following sections describe the way that devices and device special files are named and organized in Version 5.0 or higher. See Appendix B for information on other supported device mnemonics for legacy devices and their associated device names.

Note that legacy device names and device special files will be maintained for some time; their retirement schedule will be announced in a future release.

5.5.1    Related Documentation and Utilities

The following documents contain information about device names:

5.5.2    Device Special File Directories

You should be familiar with the file system hierarchy described in Chapter 6, in particular the implementation of Context Dependent Symbolic Links (CDSLs). CDSLs enable some devices to be available cluster-wide, when a system is part of a cluster.

For device special files, a /devices directory exists under / (root). This directory contains subdirectories, each of which holds the device special files for one class of devices. A device class comprises related types of devices, such as disks or nonrewind tapes. For example, the directory /dev/disk contains files for all supported disks, and /dev/ntape contains device special files for nonrewind tape devices. Currently, only the subdirectories for certain classes have been created. The available classes are defined in Appendix B. Note that in all operations you must specify paths using the /dev directory, not the /devices directory.

From the /dev directory, there are symbolic links to corresponding subdirectories to the /devices directory. For example:

lrwxrwxrwx 1 root system 25 Nov 11 13:02 ntape -> ../../../../devices/ntape

lrwxrwxrwx 1 root system 25 Nov 11 13:02 rdisk -> ../../../../devices/rdisk

lrwxrwxrwx 1 root system 24 Nov 11 13:02 tape -> ../../../../devices/tape

This structure enables certain devices to be host-specific when the system is a member of a cluster. It enables other devices to be shared between all members of a cluster. In addition, new classes of devices can be added by device driver developers and component vendors.

5.5.2.1    Legacy Device Special File Names

According to legacy device naming conventions, all device special files are stored in the /dev directory. The device special file names indicate the device type, its physical location, and other device attributes. Examples of disk and tape device special file names that use the legacy conventions are /dev/rz14f for a SCSI disk and /dev/rmt0a for a tape device. The name contains the following information:

/path/prefix{root_name}{unit_number}{suffix}
/dev/          rmt           0          a
/dev/   r       rz           4          c
/dev/   n      rmt           12         h

This information is interpreted as follows:

The path is the directory for device special files. All device special files are placed in the /dev directory.

The prefix differentiates one set of device special files for the same physical device from another set. For disks, an r prefix denotes the raw (character) device special file, as in /dev/rrz4c, and no prefix denotes the block device special file. For tapes, an n prefix denotes a no-rewind-on-close device, as in /dev/nrmt12h.

The root_name is the two or three-character driver name, such as rz for SCSI disk devices, or rmt for tape devices.

The unit_number is the unit number of the device, such as 0 in rmt0a or 4 in rz4c.

The suffix differentiates multiple device special files for the same physical device. For disks, the suffix is a letter from a to h that identifies the partition; for tapes, the suffix indicates the density at which the device is accessed, such as a (low density) or h (high density).
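
As a worked example, the legacy name /dev/nrmt12h from the preceding format listing decodes as follows: the /dev path, the n (no-rewind) prefix, the rmt tape driver name, unit number 12, and the h (high density) suffix.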

Legacy device naming conventions are supported so that scripts will continue to work as expected. However, features available with the current device naming convention may not work with the legacy naming convention. When Version 5.0 or higher is installed, none of the legacy device special files (such as rz13d) will be created during the installation. If you determine that legacy device special file naming is required, you will need to create the legacy device names using the appropriate commands described in dsfmgr(8). Note that some devices will not support legacy device special files.

5.5.2.2    Current Device Special File Names

Current device special file names are abstract; they convey no information about the device architecture or the logical path to the device. The current device naming convention consists of a descriptive name for the device and an instance number. These two elements form the basename of the device, as shown in Table 5-2.

Table 5-2:  Sample Current Device Special File Names

Location in /dev    Device Name    Instance    Basename
/disk               dsk            0           dsk0
/rdisk              dsk            0           dsk0
/disk               cdrom          1           cdrom1
/tape               tape           0           tape0

The combination of the device name and a system-assigned instance number creates a basename such as dsk0.

The current device special files are named according to the basename of the device, plus a suffix that conveys more information about the device being addressed. The suffix differs depending on the type of device: disks take a partition letter, as in dsk0a, and tapes take a density indicator, as in tape0_d0 (see Table 5-3).

5.5.2.3    Converting Device Special File Names

If you have shell scripts that use commands that act on device special files, note that the commands and utilities supplied with the operating system differ in how they handle the two naming conventions; some accept current names, some accept legacy names, and some accept both.

Note, however, that no device can use both forms of device names simultaneously. You should test any shell scripts and, if necessary, refer to the individual reference pages or online help for a utility.

If you want to update scripts, translating legacy names to the equivalent current name is a simple process. Table 5-3 shows some examples of legacy device names and corresponding current device names. Note that there is no relationship between the instance numbers. A device that was associated with device special file /dev/rz10b may be associated with /dev/disk/dsk2b under the current system.

Using these names as examples, you should be able to translate most device names that appear in your scripts. You can also use the utility dsfmgr(8) to convert device names.

Table 5-3:  Sample Device Name Translations

Legacy Device Special File Name    New Device Special File Name
/dev/rmt0a                         /dev/tape/tape0
/dev/rmt1h                         /dev/tape/tape1_d1
/dev/nrmt0a                        /dev/ntape/tape0_d0
/dev/nrmt3m                        /dev/ntape/tape3_d2
/dev/rz0a                          /dev/disk/dsk0a
/dev/rz10g                         /dev/disk/dsk10g
/dev/rrz0a                         /dev/rdisk/dsk0a
/dev/rrz10b                        /dev/rdisk/dsk10b
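
If a script hard-codes a legacy name, a single substitution is often sufficient once you have confirmed the new name. The following is a minimal sketch; the mapping of /dev/rz10g to /dev/disk/dsk2g and the file names myscript and myscript.new are hypothetical, so verify the actual mapping on your own system first (instance numbers do not correspond):

# sed 's|/dev/rz10g|/dev/disk/dsk2g|g' myscript > myscript.new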

5.5.3    Managing Device Special Files

In most cases, the management of device special files is undertaken by the system itself. During the initial full installation of the operating system, device special files are created for every SCSI disk and SCSI tape device found on the system. If the system was updated from a previous version by using the update installation procedure, both the current and the legacy device special files will exist. However, if you subsequently add new SCSI devices, dsfmgr creates only the current device special files by default; when the system is rebooted, dsfmgr is called automatically during the boot sequence to create the device special files for the new device. The system also automatically creates the device special files that it requires for pseudodevices such as ptys (pseudoterminals).

When you add a SCSI disk or tape device to the system, the new device will be located automatically, added to the hardware management databases, and its device special files will be created. On the first reboot after installation of the new device, dsfmgr is called automatically during the boot sequence to create the new device special files for that device.

However, under certain circumstances you may need to perform manual administration of device special files, such as creating legacy device special files or verifying the device databases. The dsfmgr utility enables you to manage device special files. Some devices or some system configuration changes may require the manual creation of a device special file.

To support applications that work only with legacy device names, you may need to manually create the legacy device special files, either for every existing device or only for recently added devices. Note, however, that some recent devices using features such as Fibre Channel support only the current device special file naming convention.

The following sections describe some typical uses of dsfmgr. Refer to the dsfmgr(8) reference page for detailed information on the command syntax. The system script file /sbin/dn_setup, which runs at boot time to create device special files, provides an example of a script that uses dsfmgr command options.

5.5.3.1    Using dn_setup to Perform Generic Operations

The /sbin/dn_setup script runs automatically at system startup to create device special file names. Normally you will not need to use the dn_setup options; however, they are useful if you need to troubleshoot device name problems or restore a damaged device special file directory or database files. See also Section 5.5.3.3.

If you frequently change your system configuration or install different versions of the operating system you may see device-related error messages at the system console during system start up. These messages might indicate that the system was unable to assign device special file names. This problem can occur when the saved configuration does not map to the current configuration. Adding or removing devices between installations can also cause the problem.

The command syntax is as follows:

/sbin/dn_setup [-sanity_check] [-boot] [-default] [-clean] [-default_config] [-init]

The dn_setup script has the following functions. Generally, only the -sanity_check option is useful to administrators. The remaining options should be used under the guidance of technical support for debugging and problem solving:

-sanity_check

Verifies the consistency and currency of the device special files and the directory hierarchy. The message Passed is displayed if the check is successful, as shown in the example after this list.

-boot

Runs at boot time to create all the default device special databases, files, and directories.

-default

Creates only the required device special directories.

-clean

Deletes everything in the device special directory tree and re-creates the entire tree (including device special files).

-default_config

Creates only the class and category databases.

-init

Removes all the default device special databases, files, and directories and re-creates everything.
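
For example, a minimal sketch of running the -sanity_check option on a healthy system (the exact output wording may vary):

# /sbin/dn_setup -sanity_check
Passed.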

5.5.3.2    Displaying Device Classes and Categories

Any individual type of device on the system is identified in the Category to Class-Directory, Prefix Database file, /etc/dccd.dat. You can display information in these databases by using dsfmgr. This information enables you to find out what devices are on a system and to obtain device identification attributes that can be used with other dsfmgr command options. For example, a class of devices has related physical characteristics, such as being disk devices. Each class of devices has its own directory in /dev, such as /dev/ntape for nonrewind tape devices. Device classes are stored in the Device Class Directory Default Database file, /etc/dcdd.dat.

To view the entries in these databases, you use the following command:

# /sbin/dsfmgr -s

dsfmgr: show all datum for system at /
 
Device Class Directory Default Database:
     # scope mode  name
    --  ---  ----  -----------
     1   l   0755  .
     2   c   0755  disk
     3   c   0755  rdisk
     4   c   0755  tape
     5   c   0755  ntape
     6   l   0755  none
 
Category to Class-Directory, Prefix Database:
 #   category       sub_category   type        directory  iw  t mode prefix
--   -------------- -------------- ----------  ---------  --  - ---- --------
 1   disk           cdrom          block       disk        1  b 0600 cdrom
 2   disk           cdrom          char        rdisk       1  c 0600 cdrom
 3   disk           floppy         block       disk        1  b 0600 floppy
 4   disk           floppy         char        rdisk       1  c 0600 floppy
 5   disk           floppy_fdi     block       disk        1  b 0666 floppy
 6   disk           floppy_fdi     char        rdisk       1  c 0666 floppy
 7   disk           generic        block       disk        1  b 0600 dsk
 8   disk           generic        char        rdisk       1  c 0600 dsk
 9   parallel_port  printer        *           .           1  c 0666 lp
10   pseudo         kevm           *           .           0  c 0600 kevm
11   tape           *              norewind    ntape       1  c 0666 tape
12   tape           *              rewind      tape        1  c 0666 tape
13   terminal       hardwired      *           .           2  c 0666 tty
14   *              *              *           none        1  c 0000 unknown
 
Device Directory Tree:
   12800    2 drwxr-xr-x  6 root system 2048 May 23 09:38 /dev/.
     166    1 drwxr-xr-x  2 root system  512 Apr 25 15:58 /dev/disk
    6624    1 drwxr-xr-x  2 root system  512 Apr 25 11:37 /dev/rdisk
     180    1 drw-r--r--  2 root system  512 Apr 25 11:39 /dev/tape
    6637    1 drw-r--r--  2 root system  512 Apr 25 11:39 /dev/ntape
     181    1 drwxr-xr-x  2 root system  512 May  8 16:48 /dev/none
 
Dev Nodes:
 13100  0 crw-------  1 root system 79,  0 May  8 16:47 /dev/kevm
 13101  0 crw-------  1 root system 79,  2 May  8 16:47 /dev/kevm.pterm
 13102  0 crw-r--r--  1 root system 35,  0 May  8 16:47 /dev/tty00
 13103  0 crw-r--r--  1 root system 35,  1 May  8 16:47 /dev/tty01
 13104  0 crw-r--r--  1 root system 34,  0 May  8 16:47 /dev/lp0
   169  0 brw-------  1 root system 19, 17 May  8 16:47 /dev/disk/dsk0a
  6627  0 crw-------  1 root system 19, 18 May  8 16:47 /dev/rdisk/dsk0a
   170  0 brw-------  1 root system 19, 19 May  8 16:47 /dev/disk/dsk0b
  6628  0 crw-------  1 root system 19, 20 May  8 16:47 /dev/rdisk/dsk0b
   171  0 brw-------  1 root system 19, 21 May  8 16:47 /dev/disk/dsk0c
    
.
.
.

This display provides you with information that can be used with other dsfmgr commands. (Refer to the dsfmgr(8) reference page for a complete description of the fields in the databases.) For example, the Category to Class-Directory, Prefix Database shows that nonrewind tape devices are created in the ntape directory with the prefix tape and mode 0666.

5.5.3.3    Verifying and Fixing the Databases

Under unusual circumstances, the device databases may be corrupted or device special files may accidentally be removed from the system. You may see errors indicating that a device is no longer available, but the device itself does not appear to be faulty. If you suspect that there may be a problem with the device special files, you can check the databases using the dsfmgr -v (verify) command option.

Caution

If you see error messages at system start up that indicate a device naming problem, you should use the verify command only to enable you to proceed with the boot. Check your system configuration before and after verifying the databases. The verification procedure will fix most errors and enable you to proceed, however it will not cure any underlying device or configuration problems.

Such problems are rare and usually arise when you perform unusual operations, such as switching between boot disks. Errors generally mean that the system was unable to recover and use a good copy of the previous configuration, usually because the current system configuration no longer matches the database.

As with all potentially destructive system operations, you should always be able to restore the system to its previous configuration, and to restore the previous version of the operating system from your backup.

For example, if you attempted to configure the floppy disk device to use the mtools utilities, and you found that you could not access the device, you would use the following command:


# /sbin/dsfmgr -v
 
dsfmgr: verify all datum for system at /
 
Device Class Directory Default Database:
    OK.
 
Device Category to Class Directory Database:
    OK.
 
Dev directory structure:
    OK.
 
Dev Nodes:
    ERROR: device node does not exist: /dev/disk/floppy0a
    ERROR: device node does not exist: /dev/disk/floppy0c
  Errors:   2
 
Total errors:   2

This output shows that the device special files for the floppy disk device are missing. To correct this problem, use the same command with the -F (fix) flag to correct the errors as follows:

# /sbin/dsfmgr -v -F
 
dsfmgr: verify all datum for system at /
 
Device Class Directory Default Database:
    OK.
 
Device Category to Class Directory Database:
    OK.
 
Dev directory structure:
    OK.
 
Dev Nodes:
    WARNING: device node does not exist: /dev/disk/floppy0a
    WARNING: device node does not exist: /dev/disk/floppy0c
    OK.
 
Total warnings:   2

Notice that the ERROR changes to a WARNING, which indicates that the device special files for the floppy disk were created automatically. Repeating the dsfmgr -v command will then show no errors.

5.5.3.4    Deleting Device Special Files

If a device is permanently removed from the system, you may want to remove its device special file so that it can be reassigned to another type of device. Use the dsfmgr -D command option to remove device special files as shown in the following example:


# cd /dev/disk
# ls
cdrom0a   dsk0a     dsk0c     dsk0e     dsk0g     floppy0a
cdrom0c   dsk0b     dsk0d     dsk0f     dsk0h     floppy0c
 
#  /sbin/dsfmgr -D cdrom0*
 -cdrom0a -cdrom0a -cdrom0c -cdrom0c
# ls
dsk0a     dsk0c     dsk0e     dsk0g     floppy0a
dsk0b     dsk0d     dsk0f     dsk0h     floppy0c

Notice that the output from ls shows that there are device special files for cdrom0. Running dsfmgr -D on all cdrom devices, as shown by the wildcard symbol (*), causes all device special files for that sub_category to be permanently deleted. The message that follows repeats the basename (cdrom0) twice, because it also deletes the device special files from the /dev/rdisk directory where the raw or character device special files were located.

Note that if device special files are deleted in error and no hardware changes have been made, they can be re-created as follows:


#  /sbin/dsfmgr -n cdrom0a
 
  +cdrom0a +cdrom0a
#  /sbin/dsfmgr -n cdrom0c
 
  +cdrom0c +cdrom0c

5.5.3.5    Moving and Exchanging Device Special File Names

You may want to reassign the device special files between devices by using the dsfmgr -m (move) command option. It is also possible to exchange the device special files of one device for those of another device by using the -e (exchange) option. The syntax for these command options is as follows:

/sbin/dsfmgr {-e | -m} basename_1 {basename_2 | instance}

Where basename_1 and basename_2 are device basenames such as dsk0, and instance is a device instance number. For example:

#  /sbin/dsfmgr -m dsk0 dsk10
#  /sbin/dsfmgr -e dsk1 15

5.6    Manually Configuring Devices Using ddr_config

Most device management is automatic. A device added to a system will be recognized, mapped, and added to the device databases as described in Section 5.4. However, you may sometimes need to add devices that cannot be detected and added to the system automatically. These devices may be old, or new prototypes, or they may not adhere closely to supported standards such as SCSI. In these cases, you must manually configure the device and its drivers in the kernel, using the ddr_config utility described in this section.

The following sections also describe how to create pseudoterminals (ptys), terminal pseudodevices that enable remote logins.

There are two methods of reconfiguring and rebuilding the kernel: a static method and a dynamic method.

5.6.1    Dynamic Method to Reconfigure the Kernel

The following sections explain how to use the ddr_config utility to manage the DDR database for your system. These sections introduce DDR, then describe how you use the ddr_config utility to:

5.6.1.1    Understanding Dynamic Device Recognition

Dynamic Device Recognition (DDR) is a framework for describing the operating parameters and characteristics of SCSI devices to the SCSI CAM I/O subsystem. You can use DDR to include new and changed SCSI devices in your environment without having to reboot the operating system. You do not disrupt user services and processes, as happens with static methods of device recognition.

DDR is preferred over the static method of recognizing SCSI devices. The static method, described in Chapter 4, is to edit the /sys/data/cam_data.c data file to include custom SCSI device information, reconfigure the kernel, and shut down and reboot the operating system.

Note

Support for the static method of recognizing SCSI devices will be retired in a future release.

Both methods can be employed on the same system, with the restriction that the devices described by each method are exclusive to that method (no device is defined by both).

SCSI drivers need the information that DDR provides about SCSI devices. You can supply this information by using DDR when you add new SCSI devices to the system, or you can use the /sys/data/cam_data.c data file and static configuration methods. The information provided by DDR and by the cam_data.c file serves the same purpose. Compared to the static method, DDR minimizes the amount of information that the device driver or subsystem must supply to the operating system and maximizes the amount of information that is supplied by the device itself or by defaults specified in the DDR databases.

5.6.1.1.1    Conforming to Standards

Devices that you add to the system should minimally conform to the SCSI-2 standard, as specified in SCSI-2, Small Computer System Interface-2 (X3.131-1994), or other variants of the standard documented in the Software Product Description. If your devices do not comply with the standard, or if they require exceptions from the standard, you store information about these differences in the DDR database. If the devices comply with the standard, there is usually no need to modify the database. Note, however, that such devices should be automatically recognized or configurable by using hwmgr.

5.6.1.1.2    Understanding DDR Messages

Following are the most common DDR message categories and the action, if any, that you should take.

Use the -h option to the ddr_config command to display help on command options.

5.6.2    Changing the DDR Database

When you make a change to the operating parameters or characteristics of a SCSI device, you must describe the changes in the /etc/ddr.dbase file and then compile the changes by using the ddr_config -c command.

Two common reasons for changes are:

You use the ddr_config -c command to compile the /etc/ddr.dbase file and produce a binary database file, /etc/ddr.db. When the kernel is notified that the file's state has changed, it loads the new /etc/ddr.db file. In this way, the SCSI CAM I/O subsystem is dynamically updated with the changes that you made in the /etc/ddr.dbase file, and the contents of the on-disk database are synchronized with the contents of the in-memory database.

Use the following procedure to compile the /etc/ddr.dbase database:

  1. Log in as root or become the superuser.

  2. Enter the ddr_config -c command, for example:

    # /sbin/ddr_config -c 
    

Note that there is no message confirming successful completion. When the prompt is displayed, the compilation is complete. If there are syntax errors, they are displayed at standard output and no output file is compiled.

5.6.3    Converting Customized cam_data.c Information

You use the following procedure to transfer customized information about your SCSI devices from the /sys/data/cam_data.c file to the /etc/ddr.dbase text database. In this example, MACHINE is the name of your machine's system configuration file.

  1. Log on as root or become the superuser.

  2. To produce a summary of the additions and modifications that you should make to your /etc/ddr.dbase file, enter the ddr_config -x command. For example:

    
    # /sbin/ddr_config -x MACHINE > output.file
    

    This command uses as input the system configuration file that you used to build your running kernel. The procedure runs in multiuser mode and requires no input after it has been started. You should redirect output to a file in order to save the summary information. Compile errors are reported to standard error and the command terminates when the error is reported. Warnings are reported to standard error and do not terminate the command.

  3. Edit the characteristics that are listed on the output file into the /etc/ddr.dbase file, following the syntax requirements of that file. Instructions for editing the /etc/ddr.dbase database are found in ddr.dbase(4).

  4. Enter the ddr_config -c command to compile the changes.

See Section 5.6.2 for more information.

You can add pseudodevices, disks, and tapes statically, without using DDR, by using the methods described in the following sections.

5.6.4    Adding Pseudoterminals and Devices Without Using DDR

System V Release 4 (SVR4) pseudoterminals (ptys) are implemented by default and are defined as follows:

/dev/pts/N

The variable N is a number from 0-9999.

This implementation allows for more scalability than BSD ptys (tty[a-zA-Z][0-9a-zA-Z]). The base system commands and utilities have been modified to support both SVR4 and BSD ptys. To revert to the BSD behavior, create the BSD ptys by using MAKEDEV. See also the SYSV_PTY(8), pty(7), and MAKEDEV(8) reference pages.

5.6.4.1    Adding Pseudoterminals

Pseudoterminals enable users to use the network to access a system. A pseudoterminal is a pair of character devices that emulate a hardware terminal connection to the system. Instead of hardware, however, there is a master device and a slave device. Pseudoterminals, unlike terminals, have no corresponding physical terminal port on the system. Remote login sessions, window-based software, and shells use pseudoterminals for access to a system. By default, SVR4 device special files such as /dev/pts/N are created. You must use /dev/MAKEDEV to create BSD pseudoterminals such as /dev/ttypN. Two implementations of pseudoterminals are offered: BSD STREAMS and BSD clist.

For some installations, the default number of pty devices is adequate. However, as your user community grows and each user wants to run multiple sessions on one or more timesharing machines in your environment, the machines may run out of available pty lines. The following command enables you to review the current value:

# sysconfig -q pts
pts:
nptys = 255

You can dynamically change the value with the sysconfig command, although this change will not be preserved across reboots:

# sysconfig -r pts nptys=400
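
You can verify the new value by repeating the query; a sketch, assuming the reconfiguration succeeded:

# sysconfig -q pts
pts:
nptys = 400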

To modify the value and preserve it across reboots, use the following procedure:

  1. Log in as root.

  2. Add or edit the pseudodevice entry in the system configuration file, /etc/sysconfigtab. By default, the kernel supports 255 pseudoterminals. If you need more pseudoterminals on your system, edit the entry and increase the value by the number of pseudoterminals you want to add. The following example shows the entry after the limit has been raised to 400 pseudoterminals:

    
    pts:
    nptys=400

    The pseudodevice entry for clist-based pseudoterminals is as follows:

    pseudo-device pty 655
    

    For more information on the configuration file and its pseudodevice keywords, refer to Chapter 4.

  3. For clist-based pseudoterminals, you also need to rebuild and boot the new kernel. Use the information on rebuilding and booting the new kernel in Chapter 4.

    When the system is first installed, the configuration file contains a pseudodevice entry with the default number of 255 pseudoterminals. If for some reason the number is deleted and not replaced with another number, the system defaults to supporting the minimum value of 80 pseudoterminals. The maximum value is 131072.

If you want to create BSD terminals, use the /dev/MAKEDEV command as follows:

  1. Log in as root and change to the /dev directory.

  2. Create the device special files by using the MAKEDEV command, which has the following syntax:

    ./MAKEDEV pty#

    The number sign ( # ) represents the set of pseudoterminals (0 to 101) you want to create. The first 51 sets (0 to 50) create 16 pseudoterminals for each set. The last 51 sets (51 to 101) create 46 pseudoterminals for each set. You can use the following syntax to create a large number of pseudoterminals:

    ./MAKEDEV PTY_#

    The number sign ( # ) represents the set of pseudoterminals (1 to 9) you want to create. Each set creates 368 pseudoterminals, except the PTY_3 and PTY_9 sets, which create 356 and 230 pseudoterminals, respectively. (Refer to the Software Product Description (SPD) for the maximum number of supported pseudoterminals).

    Note

    By default, the installation software creates device special files for the first two sets of pseudoterminals, pty0 and pty1. The pty0 pseudoterminals have corresponding device special files named /dev/ttyp0 through /dev/ttypf. The pty1 pseudoterminals have corresponding device special files named /dev/ttyq0 through /dev/ttyqf.

    If you add pseudoterminals to your system, the pty# variable must be higher than pty1 because the installation software sets pty0 and pty1. For example, to create device special files for a third set of pseudoterminals, enter:

    # ./MAKEDEV pty2
    

    The MAKEDEV command lists the device special files it has created. For example:

    MAKEDEV: special file(s) for pty2:
    ptyr0 ttyr0 ptyr1 ttyr1 ptyr2 ttyr2 ptyr3 ttyr3 ptyr4 ttyr4
    ptyr5 ttyr5 ptyr6 ttyr6 ptyr7 ttyr7 ptyr8 ttyr8 ptyr9 ttyr9
    ptyra ttyra ptyrb ttyrb ptyrc ttyrc ptyrd ttyrd ptyre ttyre
    ptyrf ttyrf
    

  3. To remove BSD ptys, use the /dev/SYSV_PTY command.

  4. If you want to allow root logins on all pseudoterminals, make sure an entry for ptys is present in the /etc/securettys file. If you do not want to allow root logins on pseudoterminals, delete the entry for ptys from the /etc/securettys file. For example, to add the entries for the new tty lines and to allow root login on all pseudoterminals, enter the following lines in the /etc/securettys file:

    /dev/tty08     # direct tty
    /dev/tty09     # direct tty
    /dev/tty10     # direct tty
    /dev/tty11     # direct tty
    ptys
    

    Refer to the securettys(4) reference page for more information.

5.6.4.2    Adding Other Devices

When you add new SCSI devices to your system, they are automatically detected and configured by the Hardware Manager (hwmgr) and the Device Special File Manager (dsfmgr). However, you may want to manually create device names for other devices by using /dev/MAKEDEV. For example, you may need to re-create device special files that were incorrectly deleted from the system.

For new devices, you must physically connect the devices and then make the devices known to the system. There are two methods, one for static drivers and another for loadable drivers. You will need the documentation that came with your system processor and any documentation that came with the device itself. You may also require a disk containing the driver software.

Appendix D provides an outline example of adding a PCMCIA modem to a system, and shows you how to create the device special files.

Note that it is not necessary to use /dev/MAKEDEV if you simply want to create legacy rz or tz device special files in /dev, such as /dev/rz5; the dsfmgr utility provides a method of creating these device names. To add a device for a loadable driver, see the device driver documentation.

To add a device for a static driver, see Section 5.6.4.1.

Next, make the device special files for the device, by following these steps:

  1. Change to the /dev directory.

  2. Create the device special files by using the MAKEDEV command. Use the following syntax to invoke the MAKEDEV command:

    ./MAKEDEV device#

    The device variable is the device mnemonic for the drive you are adding. Appendix B lists the device mnemonics for all supported disk and tape drives. The number sign ( # ) is the number of the device. For example, to create the device special files for two PCMCIA modem cards, use the following command:

    # ./MAKEDEV ace2 ace3
    

    MAKEDEV: special file(s) for ace2:
    tty02
    MAKEDEV: special file(s) for ace3:
    tty03
    

    The generated special files should look like this:

    crw-rw-rw-   1 root     system    35,  2 Oct 27 14:02 tty02
    crw-rw-rw-   1 root     system    35,  3 Oct 27 14:02 tty03
    

  3. Stop system activity by using the shutdown command and then turn off the processor. Refer to Chapter 2 for more information.

  4. Power up the machine. To ensure that all the devices are seen by the system, power up the peripherals before powering up the system box.

  5. Boot the system with the new kernel. Refer to Chapter 2 for information on booting your processor.

5.7    Using Device Utilities

The preceding sections described generic hardware management tools that are used to manage many aspects of all devices, such as the hwmgr utility described in Section 5.4. The following sections describe hardware management tools that are targeted at a particular kind of device and perform specific tasks. The topics covered in these sections are:

5.7.1    Finding Device Utilities

Many of the device utilities are documented elsewhere in this guide or in other volumes of the documentation set. For example, utilities that enable you to configure network devices are documented in detail in the Network Administration guide. Table 5-4 provides references to utilities documented in the guides, including those listed in this chapter. Other utilities are documented only in reference pages. Table 5-5 provides references to utilities documented in the reference pages and also provides pointers to reference data such as the Section 7 interface reference pages.

Table 5-4:  Device Utilities Documented in the Guides

Device              Task                       Location
Processor           Starting or stopping       Chapter 2
                    Sharing resources          Chapter 3, Class Scheduler
                    Monitoring                 Chapter 3 and Chapter 12 (Environmental)
                    Power Management           Chapter 3, dxpower
                    Testing memory             Chapter 12
                    Error and Event handling   Chapter 12 and Chapter 13
SCSI buses          Managing                   Section 5.7.2.1, scsimgr (note that hwmgr supersedes this utility)
                    Configuring                Section 5.7.2.2, scu
Disks               Partitioning and Cloning   Section 5.7.3, diskconfig
                    Copying                    Section 5.7.5, dd
                    Monitoring usage           Section 5.7.7, df and du
                    Power Management           Chapter 3
                    File systems status        Chapter 6
                    Testing and exercising     Chapter 12
Tapes (and Disks)   Archiving                  Chapter 9
                    Testing and exercising     Chapter 12
Clock               Setting                    Chapter 2
Modem               Configuring                Chapter 1

Table 5-5:  Device Utilities Documented in the Reference Pages

Device             Task                       Location
Devices (General)  Configuring                hwmgr(8), devswmgr(8), dsfmgr(8)
                   Device Special Files       kmknod(8), mknod(8), MAKEDEV(8), dsfmgr(8)
                   Interfaces                 atapi_ide(7), devio(7), emx(7)
Processor          Starting/Stopping          halt(8), psradm(8), reboot(2)
                   Allocating CPU resources   class_scheduling(4), processor_sets(4), runon(1)
                   Monitoring                 dxsysinfo(8), psrinfo(1)
SCSI buses         Managing                   sys_attrs_cam(5), ddr.dbase(4), ddr_config(8)
Disks              Partitioning               diskconfig(8), disklabel(4), disklabel(8), disktab(4)
                   Monitoring                 dxsysinfo(8), diskusg(8), acctdisk(8), df(1), du(1), quota(1)
                   Testing and Maintenance    diskx(8), zeero(8)
                   Interfaces                 ra(7), radisk(8), ri(7), rz(7)
                   Swap Space                 swapon(8)
Tapes (and Disks)  Archiving                  bttape(8), dxarchiver(8), rmt(8)
                   Testing and Maintenance    tapex(8)
                   Interfaces                 tz(7), mtio(7), tms(7)
Floppy             Tools                      dxmtools(1), mtools(1)
                   Testing and Maintenance    fddisk(8)
                   Interfaces                 fd(7)
Terminals, Ports   Interfaces                 ports(7)
Modem              Configuring                chat(8)
                   Interfaces                 modem(7)
Keyboard, Mouse    Interfaces                 dc(7), scc(7)

See Appendix A for a list of the utilities provided by SysMan.

5.7.2    SCSI and Device Driver Utilities

The following sections describe utilities that you use to manage SCSI devices and device drivers.

5.7.2.1    Using the SCSI Device Database Manager, scsimgr

The scsimgr utility is used to manage entries for SCSI devices in the /etc/dec_scsi_db database. This is a binary database that stores the logical identification assignments for SCSI devices and preserves these identifications across system reboots. Most SCSI device management is performed automatically by the system. For example, you can add a new SCSI device (such as a disk) to a system, and on reboot the system detects the device, creates database entries, and creates the device special files in /dev. Entries in the /etc/dec_scsi_db database are used to translate from the logical identifier (ID) of a device to a physical address. This information ensures that once a device is associated with a device identifier, it retains that identifier on the next reboot.

Note

You can now use hwmgr to perform all scsimgr operations. The scsimgr utility will be retired in a future release of the operating system.

5.7.2.2    Using the SCSI Configuration Utility, scu

The SCSI/CAM Utility Program, scu, provides commands necessary for normal maintenance and diagnostics of SCSI peripheral devices and the CAM I/O subsystem. The scu program has an extensive help feature that describes the utility's commands and conventions. Refer also to the scu(8) reference page for detailed information on using this command.

You can use scu to:

DSA Disks

For Digital Storage Architecture (DSA) disks, use the radisk program. See the radisk(8) reference page for information.

Examples of scu usage are:


# scu 
scu> set nexus bus 0 target 0 lun 0
Device: RZ1CB-CA, Bus: 0, Target: 0, Lun: 0, Type: Direct Access
scu> show capacity
 
Disk Capacity Information:
 
                  Maximum Capacity: 8380080 (4091.836 megabytes)
                      Block Length: 512
scu> show scsi status 0
SCSI Status = 0 = SCSI_STAT_GOOD = Command successfully completed

5.7.2.3    Using the Device Switch Manager, devswmgr

The devswmgr command enables you to manage the device switch table by displaying information about the device drivers in the table. You can also use the command to release device switch table entries. Typically, you release the entries for a driver after you have unloaded the driver and do not plan to reload it later. Releasing the entries frees them for use by other device drivers.

The following examples show devswmgr usage:

# devswmgr -display 
device switch database read from primary file
  device switch table has 200 entries
# devswmgr -getnum 

Device switch reservation list
                          (*=entry in use)
  driver name             instance   major
-----------------------   --------   -----
                    pfm          1      71*
                    fdi          2      58*
                    xcr          2      57 
                   kevm          1      56*
               cam_disk          2      55*
                    emx          1      54 
                  TMSCP          2      53 
                   MSCP          2      52 
                    xcr          1      44 
                    LSM          4      43 
                    LSM          3      42 
                    LSM          2      41*
                    LSM          1      40*
                    ace          1      35*
          parallel_port          1      34*
               cam_uagt          1      30 
                   MSCP          1      28 
                  TMSCP          1      27 
                    scc          1      24 
                 presto          1      22 
                cluster          2      21*
                cluster          1      19*
                    fdi          1      14*
               cam_tape          1       9 
               cam_disk          1       8*
                    pty          2       7 
                    pty          1       6 
                    tty          1       1 
                console          1       0

5.7.3    Partitioning Disks Using diskconfig

The Disk Configuration graphical user interface (diskconfig) enables you to perform disk maintenance tasks such as viewing and modifying disk labels and partitions and creating file systems on partitions.

See the diskconfig(8) reference page for information on invoking the Disk Configuration utility (diskconfig). An online help volume describes how you use the graphical interface. See the disklabel(8) reference page for information on command options.

The Disk Configuration utility provides a graphical interface to several disk maintenance tasks that can also be performed manually, using commands such as disklabel and newfs.

An example of using manual methods is provided in Section 5.7.4.

Invoke the Disk Configuration interface as described in the diskconfig(8) reference page.

Caution

The Disk Configuration utility displays appropriate warnings when you attempt to change partition sizes. However, you should plan the changes in advance to ensure that you do not overwrite any required data. Back up any data partitions before attempting this task.

A window titled Disk Configuration on hostname is displayed. This is the main window for Disk Configuration, and it lists identifying information for each disk on the system.

Select a device by double-clicking on the list item (or press Configure when a disk is highlighted). The following windows are displayed:

Disk Configuration: Configure Partitions: device name device type

This window provides information and options for configuring the partitions on the selected device.

Disk Configuration: Partition Table: device name device type

This window displays a bar chart of the current partitions in use, their sizes, and the file systems in use. You can toggle between the current partition sizes, the default table for this device, and the original (starting) table from when this session was started. If you make errors during a manual partition change, you can use this window to reset the partition table.

Refer to the online help for more information on these windows.

After making partition adjustments, use the SysMan Menu options to mount any newly created file systems.

Your new file system is now accessible.

5.7.4    Manually Partitioning Disks

This section provides the information you need to change the partition scheme of your disks. In general, you allocate disk space during the initial installation or when adding disks to your configuration. Usually, you do not have to alter partitions; however, there are cases when it is necessary to change the partitions on your disks to accommodate changes and to improve system performance.

The disk label provides detailed information about the geometry of the disk and the partitions into which the disk is divided. You can change the label with the disklabel command. You must be the root user to use the disklabel command.

There are two copies of a disk label, one located on the disk and one located in system memory. Because it is faster to access system memory than to perform I/O, the system copies the disk label into memory when it boots. Use the disklabel -r command to directly access the label on the disk instead of going through the in-memory copy.

Note

Before you change disk partitions, back up all the file systems if there is any data on the disk. Changing a partition overwrites the data on the old file system, destroying the data.

When changing partitions, remember that:

Caution

If partition a is mounted and you attempt to edit the disk label using device partition a, you will not be able to change the label. Furthermore, you will not receive an error message indicating that the label was not written.

Before changing the size of a disk partition, review the current partition setup by viewing the disk label. The disklabel command allows you to view the partition sizes. The start, end, and size of each partition are expressed in 512-byte sectors.

To review the current disk partition setup, use the following disklabel command syntax:

disklabel -r device

Specify the device with its directory name (/dev) followed by the raw device name, drive number, and partition a or c. You can also specify the disk unit and number, such as dsk1.

An example of using the disklabel command to view a disk label follows:

# disklabel -r /dev/rdisk/dsk3a 
type: SCSI
disk: rz26
label:
flags:
bytes/sector: 512
sectors/track: 57
tracks/cylinder: 14
sectors/cylinder: 798
cylinders: 2570
rpm: 3600
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0           # milliseconds
track-to-track seek: 0  # milliseconds
drivedata: 0

8 partitions:
#       size offset   fstype [fsize bsize cpg]
 a:  131072       0   4.2BSD   1024  8192  16  # (Cyl.    0 - 164*)
 b:  262144  131072   unused   1024  8192      # (Cyl.  164*- 492*)
 c: 2050860       0   unused   1024  8192      # (Cyl.    0 - 2569)
 d:  552548  393216   unused   1024  8192      # (Cyl.  492*- 1185*)
 e:  552548  945764   unused   1024  8192      # (Cyl. 1185*- 1877*)
 f:  552548 1498312   unused   1024  8192      # (Cyl. 1877*- 2569*)
 g:  819200  393216   unused   1024  8192      # (Cyl.  492*- 1519*)
 h:  838444 1212416   4.2BSD   1024  8192 16   # (Cyl. 1519*- 2569*)

You must be careful when you change partitions because you can overwrite data on the file systems or make the system inefficient. If the partition label becomes corrupted while you are changing the partition sizes, you can return to the default partition label by using the disklabel command with the -w option, as follows:

# disklabel -r -w /dev/rdisk/dsk1a rz26
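
In this command, rz26 is the disk type; the default partition tables for the supported disk types are listed in the /etc/disktab file (see also step 3 of the following procedure).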

The disklabel command allows you to change the partition label of an individual disk without rebuilding the kernel and rebooting the system. Use the following procedure:

  1. Display disk space information about the file systems by using the df command.

  2. View the /etc/fstab file to determine if any file systems are being used as swap space.

  3. Examine the disk's label by using the disklabel command with the -r option. Refer to the rz(7) and ra(7) reference pages and to the /etc/disktab file for information on the default disk partitions.

  4. Back up the file systems.

  5. Unmount the file systems on the disk whose label you want to change.

  6. Calculate the new partition parameters. You can increase or decrease the size of a partition. You can also cause partitions to overlap.

  7. Edit the disk label by using the disklabel command with the -e option to change the partition parameters, as follows:

    disklabel -e [-r] disk

    An editor, either the vi editor or that specified by the EDITOR environment variable, is invoked so you can edit the disk label, which is in the format displayed with the disklabel -r command.

    The -r option writes the label directly to the disk and updates the system's in-memory copy, if possible. The disk parameter specifies the unmounted disk (for example, dsk0 or /dev/rdisk/dsk0a).

    After you quit the editor and save the changes, the following prompt is displayed:

    write new label? [?]:
    

    Enter y to write the new label or n to discard the changes.

  8. Use the disklabel command with the -r option to view the new disk label.

5.7.4.1    Checking for Overlapping Partitions

Commands to mount or create file systems, add a new swap device, and add disks to the Logical Storage Manager first check whether the disk partition specified in the command already contains valid data, and whether it overlaps with a partition that is already marked for use. The fstype field of the disk label is used to determine when a partition or an overlapping partition is in use.

If the partition is not in use, the command continues to execute. In addition to mounting or creating file systems, commands like mount, newfs, fsck, voldisk, mkfdmn, rmfdmn, and swapon also modify the disk label, so that the fstype field specifies how the partition is being used. For example, when you add a disk partition to an AdvFS domain, the fstype field is set to AdvFS.

If the partition is not available, these commands return an error message and ask if you want to continue, as shown in the following example:

# newfs /dev/disk/dsk8c 
WARNING: disklabel reports that basename,partition currently
is being used as "4.2BSD" data. Do you want to
continue with the operation and possibly destroy
existing data? (y/n) [n]

Applications, as well as operating system commands, can modify the fstype of the disk label, to indicate that a partition is in use. See the check_usage(3) and set_usage(3) reference pages for more information.

5.7.5    Copying Disks

You can use the dd command to copy a complete disk or a disk partition; that is, you can produce a physical copy of the data on the disk or disk partition.

Note

Because the dd command was not meant for copying multiple files, you should copy a disk or a partition only on a disk that is used as a data disk or one that does not contain a file system. Use the dump and restore commands, as described in Chapter 9, to copy disks or partitions that contain a UFS file system. Use the vdump and vrestore commands, as described in AdvFS Administration, to copy disks or partitions that contain an AdvFS fileset.

UNIX protects the first block of a disk that contains a valid disk label, because this is where the disk label is stored. As a result, if you copy a partition to a partition on a target disk that contains a valid disk label, you must decide whether you want to keep the existing disk label on that target disk.

If you want to maintain the disk label on the target disk, use the dd command with the skip and seek options to move past the protected disk label area on the target disk. Note that the target disk must be the same size as or larger than the original disk.

To determine if the target disk has a label, use the following disklabel command syntax:

disklabel -r target_device

You must specify the target device directory name (/dev) followed by the raw device name, drive number, and partition c. If the disk does not contain a label, the following message is displayed:

Bad pack magic number (label is damaged, or pack is unlabeled)
 

The following example shows a disk that already contains a label:

# disklabel -r /dev/rdisk/dsk1c
type: SCSI
disk: rz26
label:
flags:
bytes/sector: 512
sectors/track: 57
tracks/cylinder: 14
sectors/cylinder: 798
cylinders: 2570
rpm: 3600
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0           # milliseconds
track-to-track seek: 0  # milliseconds
drivedata: 0

8 partitions:
#      size  offset  fstype [fsize bsize  cpg]
 a:  131072       0  unused 1024 8192 # (Cyl.    0 - 164*)
 b:  262144  131072  unused 1024 8192 # (Cyl.  164*- 492*)
 c: 2050860       0  unused 1024 8192 # (Cyl.    0 - 2569)
 d:  552548  393216  unused 1024 8192 # (Cyl.  492*- 1185*)
 e:  552548  945764  unused 1024 8192 # (Cyl. 1185*- 1877*)
 f:  552548 1498312  unused 1024 8192 # (Cyl. 1877*- 2569*)
 g:  819200  393216  unused 1024 8192 # (Cyl.  492*- 1519*)
 h:  838444 1212416  unused 1024 8192 # (Cyl. 1519*- 2569*)

If the target disk already contains a label and you do not want to keep the label, you must clear the label by using the disklabel -z command. For example:


# disklabel -z /dev/rdisk/dsk1c

To copy the original disk to the target disk and keep the target disk label, use the following dd command syntax:

dd if=original_disk of=target_disk skip=16 seek=16 bs=block_size

Specify the device directory name (/dev) followed by the raw device name, drive number, and the original and target disk partitions. For example:

# dd if=/dev/rdisk/dsk0c of=/dev/rdisk/dsk1c \
skip=16 seek=16 bs=512k
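
Note that dd counts the skip and seek values in blocks of the size given by bs, so with bs=512k this example leaves the first 16 × 512 KB of each disk uncopied rather than only the 16-sector label area. If you need to skip only the label area, a 512-byte block size keeps the skipped region to 16 sectors, at some cost in transfer speed; a sketch:

# dd if=/dev/rdisk/dsk0c of=/dev/rdisk/dsk1c \
skip=16 seek=16 bs=512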

5.7.6    Cloning a System Disk

This section suggests a procedure that can be used to clone a system disk. For example, you could move your system disk from a small disk to one with larger capacity without reinstalling the operating system. Cloning involves re-creating the entire file system of one disk (the source) on a new disk (the clone). Note that this is not presented as a definitive method, and your local system may require additional steps. The operation is best undertaken in single-user mode.

The process assumes that you have installed the new disk as described in the hardware documentation supplied with the disk.

  1. Identify the device special files for the source and target disks (/dev/disk/dskNx). Use dsfmgr or hwmgr to identify and check disk characteristics. See Section 5.4 for information on using hwmgr and Section 5.5 for information on using dsfmgr.

  2. Examine and copy the /etc/fstab file. This file describes the partitions and file systems you will need to clone.

  3. Examine and copy the /etc/sysconfigtab file, which lists the swap partitions that you will need to re-create on the target disk. See Chapter 3 and the swapon(8) reference page.

  4. Use diskconfig as described in Section 5.7.3 to label and partition a target disk to receive the clone copy. The size of partitions may differ, but the layout and file system information must be identical to the source disk. For cloning a boot disk, you must write a boot block to the target disk.

    It is possible to change partition layouts if you do not want all source partitions, but you will need to modify the target fstab file.

  5. If you have AdvFS domains complete this step. Otherwise, go to step 6.

    Create domains for /, usr, and var, ensuring that the partitions are of equal or greater size. The following example assumes that the /var file system exists in /usr:

    # mkfdmn /dev/disk/dsk1a root_tmp
    # mkfdmn /dev/disk/dsk1g usr_tmp
     
    # mkfset root_tmp root
    # mkfset usr_tmp usr
    # mkfset usr_tmp var
     
    # mkdir /clone
    # mount root_tmp#root /clone
    # vdump -0 -f - / | (cd /clone ; vrestore -x -f -)
     
    # mount usr_tmp#usr /clone/usr
    # vdump -0 -f - /usr | (cd /clone/usr ; vrestore -x -f -)
     
    # mount usr_tmp#var /clone/var
    # vdump -0 -f - /var | (cd /clone/var ; vrestore -x -f -)
    

    Next, correct the links in /clone/etc/fdmns. The copied version will be pointing to the original device special files. Change these links to point to the device special files for the newly created domains. For example:

    # cd /clone/etc/fdmns/root_domain
    # rm -r *
    # ln -s /dev/disk/dsk1a .
     
    # cd /clone/etc/fdmns/usr_domain
    # rm -r *
    # ln -s /dev/disk/dsk1g .
    

    If UFS is not in use on the source disk, go to step 7.

  6. If you have UFS file systems on the source disk, complete this step. Otherwise go to step 7.

    Create a /clone mount point and mount the UFS partition (for example, a) of the target disk on /clone, as shown in the following example:

    
    # mount /dev/disk/dsk1a /clone
    

    Next, dump the partition as follows:

    # dump -0u -f - /dev/disk/dsk0a | \
    (cd /clone ; restore -r -f -)
    

  7. Verify file ownerships and that all required file system branches were dumped. The following diff command sequence will help you do this and provide a record of the dump:

    
    # ls -R -l /clone > /newfiles
    # cd /
    # umount /clone
    # ls -R -l / > /oldfiles
    # diff /newfiles /oldfiles > files.diff
    

  8. If differences occur, remount the clone and correct them. You can edit the files.diff file to create a script that corrects the errors, as sketched below.
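
    For example, the following illustrative pipeline extracts the file names that differ so that you can re-examine them; adapt it to your needs:

    # grep '^[<>]' files.diff | awk '{print $NF}' | sort -u > /recheck.list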

  9. If you used this process to create a bootable clone disk, examine the clone's /etc/fstab file before booting from the new disk, and make any necessary changes to the partition mount entries. Similarly, make any necessary changes to the swap entries in the clone's /etc/sysconfigtab file.

  10. To test the clone, shut down and halt the system. At the console prompt, list the available devices as follows:

    
    >>> show devices
    

    Determine the SCSI address of the target disk and its console device name, such as DKxNNN.
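
    A representative display follows (the device names and disk types are illustrative):

    >>> show devices
    dka0.0.0.1.0      DKA0     RZ26
    dka200.2.0.1.0    DKA200   RZ28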

    Boot from the cloned disk as follows:

    
    >>> boot DKA200
    

  11. If the boot is successful, and all system features appear to be functioning correctly, you can permanently swap the source and target disks by changing the appropriate console environment variables, physically swapping the devices, or using hwmgr.
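
    For example, to make the clone the default boot device, set the console boot variable (the device name here is taken from the preceding example):

    >>> set bootdef_dev DKA200
    >>> show bootdef_dev
    bootdef_dev         dka200.2.0.1.0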

The bootable tape utility described in Chapter 9 provides a method of creating a bootable standalone kernel on magnetic tape, which may enable faster recovery if you have problems with the root disk. Also consider the Logical Storage Manager (LSM) features that enable you to create a mirror of the root disk.

5.7.7    Monitoring Disk Use

To ensure an adequate amount of free disk space, you should regularly monitor the disk use of your configured file systems. You can do this in any of the following ways:

  -  Check the available free space by using the df command, as described in Section 5.7.7.1.

  -  Check disk use by directory or by user with the du and quot commands, as described in Section 5.7.7.2.

  -  Verify file system quotas, if imposed, by using the quota command.

You can use the quota command only if you are the root user.
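
For example, to review the quota of a single user (assuming quotas are enabled; the account name reuses an example from Section 5.7.7.2):

# quota -v rubin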

5.7.7.1    Checking Available Free Space

To ensure sufficient space for your configured file systems, you should regularly use the df command to check the amount of free disk space in all of the mounted file systems. The df command displays statistics about the amount of free disk space on a specified file system or on a file system that contains a specified file.

The df command has the following syntax:

df [-eiknPt] [-F fstype] [file | file_system ...]

With no arguments or options, the df command displays the amount of free disk space on all of the mounted file systems. For each file system, the df command reports the file system's configured size in 512-byte blocks, unless you specify the -k option, which reports the size in kilobyte blocks. The command displays the total amount of space, the amount presently used, the amount presently available (free), the percentage used, and the directory on which the file system is mounted.

For AdvFS file domains, the df command displays disk space usage information for each fileset.

If you specify a device that has no file systems mounted on it, df displays the information for the root file system.

You can specify a file path name to display the amount of available disk space on the file system that contains the file.
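
For example, the following command reports on the file system that contains a user directory (reusing the illustrative mounts from the example below):

# df /usr/users/rubin
Filesystem         512-blks   used  avail capacity Mounted on
/dev/disk/dsk3c     394796     12 355304     0%  /usr/users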

Refer to the df(1) reference page for more information.

Note

You cannot use the df command with the block or character special device name to find free space on an unmounted file system. Instead, use the dumpfs command.
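
For example, the following sketch assumes an unmounted UFS partition; dumpfs prints the superblock information, which includes the free block counts:

# dumpfs /dev/rdisk/dsk3c | grep -i free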

The following example displays disk space information about all the mounted file systems:

# /sbin/df 
Filesystem         512-blks   used  avail capacity Mounted on
/dev/disk/dsk2a      30686  21438   6178    77%  /
/dev/disk/dsk0g     549328 378778 115616    76%  /usr
/dev/disk/dsk2g     101372   5376  85858     5%  /var
/dev/disk/dsk3c     394796     12 355304     0%  /usr/users
/usr/share/mn@tsts  557614 449234  52620    89%  /usr/share/mn
domain#usr          838432 680320  158112   81%  /usr

Note

The newfs command reserves a percentage of the file system disk space for allocation and block layout. This can cause the df command to report that a file system is using more than 100 percent of its capacity. You can change this percentage by using the tunefs command with the -minfree flag.
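
For example, the following sketch assumes that the reserved percentage is supplied with the -m option and that the file system is unmounted; verify the option letter for your system in tunefs(8):

# tunefs -m 5 /dev/rdisk/dsk2g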

5.7.7.2    Checking Disk Use

If you determine that a file system has insufficient space available, check how its space is being used. You can do this with the du command or the quot command.

The du command pinpoints disk space allocation by directory. With this information you can decide who is using the most space and who should free up disk space.

The du command has the following syntax:

/usr/bin/du [-aklrsx] [directory ... | filename ...]

The du command displays the number of blocks contained in all directories (listed recursively) within each specified directory, file name, or (if none are specified) the current working directory. The block count includes the indirect blocks of each file in 1-kilobyte units, independent of the cluster size used by the system.

If you do not specify any options, an entry is generated only for each directory. Refer to the du(1) reference page for more information on command options.

The following example displays a summary of blocks that all main subdirectories in the /usr/users directory use:


# /usr/bin/du -s /usr/users/*  
440     /usr/users/barnam
43      /usr/users/broland
747     /usr/users/frome
6804    /usr/users/morse
11183   /usr/users/rubin
2274    /usr/users/somer

From this information, you can determine that user rubin is using the most disk space.

The following example displays the space that each file and subdirectory in the /usr/users/rubin/online directory uses:

# /usr/bin/du -a /usr/users/rubin/online 
1	/usr/users/rubin/online/info/license
2	/usr/users/rubin/online/info
7	/usr/users/rubin/online/TOC_ft1
16	/usr/users/rubin/online/build
  .
  .
  .
251	/usr/users/rubin/online

Note

As an alternative to the du command, you can use the ls -s command to obtain the size and usage of files. Do not use the ls -l command to obtain usage information; ls -l displays only file sizes.

You can use the quot command to list the number of blocks in the named file system that each user currently owns. You must be the root user to use the quot command.

The quot command has the following syntax:

/usr/sbin/quot [-c] [-f] [-n] [file_system]

The following example displays the number of blocks and the number of files that each user owns in the file system on /dev/rdisk/dsk0h:

# /usr/sbin/quot -f /dev/rdisk/dsk0h 
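
The display resembles the following, where the columns show the number of blocks, the number of files, and the owning user (the figures are illustrative, reusing accounts from the du example):

/dev/rdisk/dsk0h:
11183   2214   rubin
 6804   1327   morse
  747    190   frome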

Note

You must specify the character (raw) device special file to return the information, because the block special device file is busy while the device is mounted.

Refer to the quot(8) reference page for more information.