
2    Diagnosing Performance Problems

To get the maximum performance from a system, you must eliminate any performance bottlenecks. Diagnosing performance problems involves identifying the problem (for example, excessive paging and swapping), and then determining the source of the problem (for example, insufficient memory or incorrect virtual memory subsystem attribute values).

This chapter describes how to gather and analyze information that will help you diagnose performance problems. This chapter also describes how to modify kernel variables, attributes, and parameters. Later chapters describe how to correct performance problems found in various subsystems.



2.1    Checking System Performance

Although performance problems often are readily apparent (for example, applications complete slowly or the system logs messages stating that it is out of resources), other problems may not be obvious to users or administrators. In addition, it may be difficult to identify the source of the problem.

There are several ways to determine whether a system has a performance problem or whether you can improve system performance. Indications of a performance problem include applications that complete slowly, system messages reporting exhausted resources, and excessive paging or swapping activity.

The following sections describe how to obtain information that will help you identify a performance problem and its source.



2.2    Obtaining Performance Information

To determine how your system is performing and to help diagnose performance problems, you must obtain information about your system. To do this, you need to log system events and monitor resources.

In addition, you must gather performance statistics under different conditions. For example, gather information when the system is running well and when system performance is poor. This will allow you to compare different sets of data.

After you set up your environment, immediately start to gather performance information by performing the following tasks: configuring event logging (Section 2.2.1), setting up system accounting and disk quotas (Section 2.2.2), choosing how to monitor system events (Section 2.2.3), and gathering performance statistics (Section 2.2.4).

The following sections describe these tasks in detail.



2.2.1    Configuring Event Logging

The DIGITAL UNIX operating system uses the system event-logging facility and the binary event-logging facility to log system events. The log files can help you diagnose performance problems.

The system event-logging facility uses the syslog function to log events in ASCII format. The syslogd daemon collects the messages logged from the various kernel, command, utility, and application programs. The daemon then writes the messages to a local file or forwards the messages to a remote system, as specified in the /etc/syslog.conf default event-logging configuration file. See syslogd(8) for more information.
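For example, each noncomment line in the syslog.conf file pairs a facility and severity level with a destination. The following lines are a hedged sketch of the kind of entries you might see or add; the actual defaults on your system may differ:

# facility.severity        destination (fields are separated by tabs)
kern.debug                 /var/adm/syslog.dated/kern.log
daemon.notice              /var/adm/syslog.dated/daemon.log
*.emerg                    *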

The binary event-logging facility detects hardware and software events in the kernel and logs detailed information in binary format records. The binary event-logging facility uses the binlogd daemon to collect various event-log records. The daemon then writes these records to a local file or forwards the records to a remote system, as specified in the /etc/binlog.conf default configuration file.

You can examine the binary event log files by using the DECevent utility, which translates the records from binary format to ASCII format. DECevent can analyze the information and help isolate the cause of an error. DECevent also can continuously monitor the log file and display information about system events.

You must register a license Product Authorization Key (PAK) to use DECevent's analysis and notification features; these features may also be available as part of your DIGITAL service agreement. A PAK is not needed to use DECevent to translate the binary log file to ASCII format. See Section 2.2.3.1 for more information about DECevent.

You can also use the dia or the uerf command to translate binary log files to ASCII format. See dia(8) and uerf(8) for information.
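For example, to translate the binary error log and page through the events in reverse chronological order, you might enter the following command; the options vary by version, so see the reference pages:

# uerf -R | more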

After you install the operating system, you can customize system and binary event logging by modifying the default configuration files. See the System Administration manual and the Release Notes for more information about configuring event logging.



2.2.2    Setting up System Accounting and Disk Quotas

System accounting allows you to obtain information about how users utilize resources. You can obtain information about the amount of CPU time and connect time, the number of processes spawned, memory and disk usage, the number of I/O operations, and the number of printing operations.

Disk quotas allow you to limit the disk space available to users and to monitor disk space usage. See the System Administration manual for information about setting up system accounting and UNIX file system (UFS) disk quotas. See the Advanced File System (AdvFS) documentation for information about AdvFS quotas.



2.2.3    Choosing How to Monitor System Events

DIGITAL recommends that you set up a routine to continuously monitor system performance and to alert you when serious problems occur. A number of products and commands provide system monitoring. The following sections describe the DECevent utility, Performance Manager, and Performance Visualizer in detail.



2.2.3.1    Using DECevent

The DECevent utility continuously monitors system events through the binary event-logging facility, decodes events, and tracks the number and the severity of events logged by system devices. DECevent attempts to isolate failing device components and provides a notification mechanism that can warn of potential problems.

DECevent determines if a threshold has been crossed, according to the number and severity of events reported. Depending on the type of threshold crossed, DECevent analyzes the events and notifies users of the events (for example, through mail). You must register a license PAK to use the DECevent analysis and notification features.



2.2.3.2    Using Performance Manager

Performance Manager (PM) for DIGITAL UNIX allows you to simultaneously monitor many DIGITAL UNIX nodes, so you can detect and correct performance problems. PM can operate in the background, alerting you to performance problems. You can also configure PM to continuously monitor systems and data. Monitoring only a local node does not require a PM license. However, a PM license is required to monitor multiple nodes and clusters.

PM gathers and displays Simple Network Management Protocol (SNMP and eSNMP) data for the systems you choose, and allows you to detect and correct performance problems from a central location. PM has a graphical user interface (GUI) that runs locally and displays data from the monitored systems.

Use the GUI to choose the systems, data, and displays you want to monitor. You can customize and extend PM, so you can create and save performance monitoring sessions. Graphs and charts can show hundreds of different system values, including CPU performance, memory usage, disk transfers, file-system capacity, network efficiency, database performance, and AdvFS and cluster-specific metrics. Data archives can be used for high-speed playback or long-term trend analysis.

PM provides comprehensive thresholding, rearming, and tolerance facilities for all displayed metrics. You can set a threshold on every key metric, and specify the PM reaction when a threshold is crossed. For example, you can configure PM to send mail, to execute a command, or to display a notification message.

PM also has performance analysis and system management scripts, as well as cluster-specific and AdvFS-specific scripts. Run these scripts separately to target specific problems or run them simultaneously to check the general system performance. The PM analyses include suggestions for eliminating problems. PM automatically discovers cluster members when a single cluster member node is specified, and it can monitor both individual cluster members and an entire cluster concurrently.

See the Performance Manager online documentation for more information.



2.2.3.3    Using Performance Visualizer

Performance Visualizer is a valuable tool for developers of parallel applications. Because it monitors performance of several systems simultaneously, it allows you to see the impact of a parallel application on all the systems, and to ensure that the application is balanced across all systems. When problems are identified, you can change the application code and use Performance Visualizer to evaluate the effects of these changes. Performance Visualizer is a DIGITAL UNIX layered product and requires a license.

Performance Visualizer also helps you identify overloaded systems, underutilized resources, active users, and busy processes.

Using Performance Visualizer, you can monitor resource usage across all of the hosts in a parallel system or on individual hosts. See the Performance Visualizer documentation for more information.



2.2.4    Gathering Performance Statistics

Use the commands described in this chapter to gather performance statistics to benchmark your system and to help identify performance problems. It is important to gather statistics under a variety of conditions; for example, gather information when the system is running well under a typical workload and again when performance is poor, so that you can compare the two sets of data.

In addition, you may want to use the sys_check utility to check your configuration and kernel variable settings. The sys_check utility uses some of the tools described in Section 2.3 to gather performance information and outputs this information in an easy-to-read format. The sys_check utility provides warnings and tuning recommendations if necessary. To obtain the sys_check utility, access the following location or call your customer service representative:

ftp://ftp.digital.com/pub/DEC/IAS/sys_check
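A typical invocation redirects the report to a file for later review. The following is a hedged sketch, because the options and the output format vary with the version of the utility:

# sys_check > /var/tmp/sys_check.html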

See Section 2.3 for a list of tools that you can use to gather information about your system.



2.3    Performance Monitoring Tools Overview

There are various utilities and commands that you can use to gather performance statistics and other information about the system. You may have to use a combination of tools to obtain a comprehensive picture of your system.

It is important for you to gather information about your system while it is running well, in addition to when it has poor performance. Comparing the two sets of data will help you to diagnose performance problems.

In addition to tools that gather system statistics, there are application profiling tools that allow you to collect statistics on CPU usage, call counts, call cost, memory usage, and I/O operations at various levels (for example, at a procedure level or at an instruction level). Profiling allows you to identify sections of code that consume large portions of execution time. In a typical program, most execution time is spent in relatively few sections of code. To improve performance, the greatest gains result from improving coding efficiency in time-intensive sections. There also are tools that you can use to debug or profile the system kernel and collect CPU statistics and other information.

The following tables describe the tools that you can use to gather resource statistics and profiling information. In addition, there are many freeware programs available in prebuilt formats on the DIGITAL UNIX Freeware CD-ROM. These include the top, lsof, and monitor commands. You can also use the Continuous Profiling Infrastructure dcpi tool, which provides continuous, low-overhead system profiling. The dcpi tool is available from the DIGITAL Systems Research Center at the following location:

http://www.research.digital.com/SRC/dcpi

Table 2-1 describes the tools you can use to gather information about CPU and memory usage.

Table 2-1:  CPU and Memory Monitoring Tools


vmstat

Displays virtual memory and CPU usage statistics (Section 2.4.2)

Displays information about process threads, virtual memory usage (page lists, page faults, pageins, and pageouts), interrupts, and CPU usage (percentages of user, system, and idle times). The first report gives the statistics since boot time; subsequent reports give the statistics for the preceding interval.

ps

Displays CPU and virtual memory usage by processes (Section 2.4.1)

Displays current statistics for running processes, including CPU usage, the processor and processor set, and the scheduling priority. The ps command also displays virtual memory statistics for a process, including the number of page faults, page reclamations, and pageins; the percentage of real memory (resident set) usage; the resident set size; and the virtual address size.

ipcs

Displays IPC statistics

Displays interprocess communication (IPC) statistics for currently active message queues, shared-memory segments, semaphores, remote queues, and local queue headers. The information provided in the following fields reported by the ipcs -a command can be especially useful: QNUM, CBYTES, QBYTES, SEGSZ, and NSEMS. See ipcs(1) for more information.

swapon

Displays information about swap space utilization (Section 2.4.4)

Displays the total amount of allocated swap space, swap space in use, and free swap space, and also displays this information for each swap device. You can also use the swapon command to allocate additional swap space.

uptime

Displays the system load average (Section 2.4.3)

Displays the number of jobs in the run queue for the last 5 seconds, the last 30 seconds, and the last 60 seconds. The uptime command also shows the number of users logged into the system and how long a system has been running.

w

Reports system load averages and user information

Displays the current time, the amount of time since the system was last started, the users logged in to the system, and the number of jobs in the run queue for the last 5 seconds, 30 seconds, and 60 seconds. The w command also displays information about system users, including login and process information. See w(1) for more information.

xload

Monitors the system load average

Displays the system load average in a histogram that is periodically updated. See xload(1X) for more information.

memx

Exercises system memory

Exercises memory by running a number of processes. You can specify the amount of memory to exercise, the number of processes to run, and a file for diagnostic output. Errors are written to a log file. See memx(8) for more information.

shmx

Exercises shared memory

Exercises shared memory segments by running a shmxb process. The shmx and shmxb processes alternate writing and reading the other process' data in the shared memory segments. You can specify the number of memory segments to test, the size of the segment, and a file for diagnostic output. Errors are written to a log file. See shmx(8) for more information.

kdbx cpustat

Reports CPU statistics (Section 2.4.5)

Displays CPU statistics, including the percentages of time the CPU spends in various states.

kdbx lockstats

Reports lock statistics (Section 2.4.6)

Displays lock statistics for each lock class on each CPU in the system.

dbx print vm_perfsum

Reports virtual memory statistics (Section 2.4.7)

You can check virtual memory by using the dbx debugger and examining the vm_perfsum data structure, which contains information about page faults, swap space, and the free page list.

Table 2-2 describes the tools you can use to obtain information about disk activity and usage.

Table 2-2:  General Disk Monitoring Tools


iostat

Displays disk and CPU usage (Section 2.5.1)

Displays transfer statistics for each disk, and the percentage of time the system has spent in user mode, in user mode running low priority (nice) processes, in system mode, and in idle mode.

diskx

Tests disk driver functionality

Reads and writes data to disk partitions. The diskx exerciser analyzes data transfer performance, verifies the disktab database file entry, and tests reads, writes, and seeks. The diskx exerciser can destroy the contents of a partition. See diskx(8) for more information.

dbx print nchstats

Reports namei cache statistics (Section 2.5.2)

Reports namei cache statistics, including hit rates.

dbx print vm_perfsum

Reports UBC statistics (Section 2.5.3)

Reports Unified Buffer Cache (UBC) statistics, including the number of pages of memory that the UBC is using.

dbx print xpt_qhead, ccmn_bp_head, and xpt_cb_queue

Reports Common Access Method (CAM) statistics (Section 2.5.4)

Reports CAM statistics, including information about buffers and completed I/O operations.

Table 2-3 describes the tools you can use to obtain information about the UNIX File System (UFS).

Table 2-3:  UFS Monitoring Tools


dumpfs

Displays UFS information (Section 2.6.1)

Displays detailed information about a UFS file system or a special device, including information about the file system fragment size, the percentage of free space, super blocks, and the cylinder groups.

dbx print ufs_clusterstats

Reports UFS clustering statistics (Section 2.6.2)

Reports statistics on how the system is performing cluster read and write transfers.

dbx print bio_stats

Reports UFS metadata buffer cache statistics (Section 2.6.3)

Reports statistics on the metadata buffer cache, including superblocks, inodes, indirect blocks, directory blocks, and cylinder group summaries.

fsx

Exercises file systems

Exercises UFS and AdvFS file systems by creating, opening, writing, reading, validating, closing, and unlinking a test file. Errors are written to a log file. See fsx(8) for more information.

Table 2-4 describes the tools you can use to obtain information about the Advanced File System (AdvFS).

Table 2-4:  AdvFS Monitoring Tools


advfsstat

Displays AdvFS performance statistics (Section 2.7.1)

Allows you to obtain extensive AdvFS performance information, including buffer cache, fileset, volume, and bitfile metadata table (BMT) statistics, for a specific interval of time.

advscan

Identifies disks in a file domain (Section 2.7.2)

Locates pieces of AdvFS file domains on disk partitions and in LSM disk groups.

showfdmn

Displays detailed information about AdvFS file domains and volumes (Section 2.7.3)

Allows you to determine if files are evenly distributed across AdvFS volumes. The showfdmn utility displays information about a file domain, including the date created and the size and location of the transaction log, and information about each volume in the domain, including the size, the number of free blocks, the maximum number of blocks read and written at one time, and the device special file. For multivolume domains, the utility also displays the total volume size, the total number of free blocks, and the total percentage of volume space currently allocated.

showfile

Displays information about files in an AdvFS fileset (Section 2.7.4)

Displays detailed information about files (and directories) in an AdvFS fileset. The showfile command allows you to check a file's fragmentation. A low performance percentage (less than 80 percent) indicates that the file is fragmented on the disk. The command also displays the extent map of each file. An extent is a contiguous area of disk space that AdvFS allocates to a file. Simple files have one extent map; striped files have an extent map for every stripe segment. The extent map shows whether the entire file or only a portion of the file is fragmented.

showfsets

Displays AdvFS fileset information for a file domain (Section 2.7.5)

Displays information about the filesets in a file domain, including the fileset names, the total number of files, the number of free blocks, the quota status, and the clone status. The showfsets command also displays block and file quota limits for a file domain or for a specific fileset in the domain.

fsx

Exercises file systems

Exercises AdvFS and UFS file systems by creating, opening, writing, reading, validating, closing, and unlinking a test file. Errors are written to a log file. See fsx(8) for more information.

Table 2-5 describes the commands you can use to obtain information about the Logical Storage Manager (LSM).

Table 2-5:  LSM I/O Performance and Event Monitoring Tools


volprint

Displays LSM disk configuration information (Section 2.8.1)

Displays information about LSM disk groups, disk media, volumes, plexes, and subdisk records. It does not display disk access records. See volprint(8) for more information.

volstat

Displays LSM I/O performance statistics (Section 2.8.2)

Displays performance statistics since boot time for all LSM objects (volumes, plexes, subdisks, and disks). These statistics include information about read and write operations, including the total number of operations, the number of failed operations, the number of blocks read or written, and the average time spent on the operation in a specified interval of time. The volstat utility also can reset the I/O statistics. See volstat(8) for more information.

voltrace

Tracks I/O operations on LSM volumes (Section 2.8.3)

Sets I/O tracing masks against one or all volumes in the LSM configuration and logs the results to the LSM default event log, /dev/volevent. The utility also formats and displays the tracing mask information and can trace the following ongoing LSM events: requests to logical volumes, requests that LSM passes to the underlying block device drivers, and I/O events, errors, and recoveries. See voltrace(8) for more information.

volwatch

Monitors LSM for object failures (Section 2.8.4)

Monitors LSM for failures in disks, volumes, and plexes, and sends mail if a failure occurs. The volwatch script starts automatically when you install LSM. See volwatch(8) for more information.

dxlsm

Displays statistics on LSM objects (Section 2.8.5)

Using the Analyze menu, displays information about LSM disks, volumes, and subdisks. See dxlsm(8) for more information.

Table 2-6 describes the commands you can use to obtain information about network operations.

Table 2-6:  Network Monitoring Tools


netstat

Displays network statistics (Section 2.9.1)

Displays a list of active sockets for each protocol, information about network routes, and cumulative statistics for network interfaces, including the number of incoming and outgoing packets and packet collisions. It also displays information about memory used for network operations.

nfsstat

Displays network and NFS statistics (Section 2.9.2)

Displays Network File System (NFS) and Remote Procedure Call (RPC) statistics for clients and servers, including the number of packets that had to be retransmitted (retrans) and the number of times a reply transaction ID did not match the request transaction ID (badxid).

tcpdump

Monitors network interface packets

Monitors and displays packet headers on a network interface. You can specify the interface on which to listen, the direction of the packet transfer, or the type of protocol traffic to display. The tcpdump command allows you to monitor the network traffic associated with a particular network service and to identify the source of a packet. In the case of slow network performance, it lets you determine whether requests are being received and acknowledged and where network requests originate. Your kernel must be configured with the packetfilter option to use the command. See tcpdump(8) and packetfilter(7) for more information.

traceroute

Displays the packet route to a network host

Tracks the route network packets follow from gateway to gateway. See traceroute(8) for more information.

ping

Determines if a system can be reached on the network

Sends an Internet Control Message Protocol (ICMP) echo request to a host in order to determine if a host is running and reachable and to determine if an IP router is reachable. Enables you to isolate network problems, such as direct and indirect routing problems. See ping(8) for more information.

nfswatch

Monitors an NFS server

Monitors all incoming network traffic to an NFS server and divides it into several categories, including NFS reads and writes, NIS requests, and RPC authorizations. The number and percentage of packets received in each category appears on the screen in a continuously updated display. Your kernel must be configured with the packetfilter option to use the command. See nfswatch(8) and packetfilter(7) for more information.

sobacklog_hiwat attribute

Reports the maximum number of pending requests to any server socket (Section 2.9.3)

Allows you to display the maximum number of pending requests to any server socket in the system.

sobacklog_drops attribute

Reports the number of backlog drops that exceed a socket's backlog limit (Section 2.9.3)

Allows you to display the number of times the system dropped a received SYN packet, because the number of queued SYN_RCVD connections for a socket equaled the socket's backlog limit.

somaxconn_drops attribute

Reports the number of drops that exceed the value of the somaxconn attribute (Section 2.9.3)

Allows you to display the number of times the system dropped a received SYN packet because the number of queued SYN_RCVD connections for a socket equaled the upper limit on the backlog length (somaxconn attribute).

ps axlmp

Displays information about idle threads (Section 2.9.4)

Displays information about idle threads on a client system.

Table 2-7 describes the commands you can use to obtain information about the kernel and applications. Detailed information about these profiling and debugging tools is located in the Programmer's Guide and the Kernel Debugging manual.

Table 2-7:  Profiling and Debugging Tools


atom

Profiles applications

Consists of a set of prepackaged tools (third, hiprof, and pixie) that can be used to instrument applications for profiling or debugging purposes. The atom toolkit also includes a command interface and a collection of instrumentation routines that you can use to create custom tools for instrumenting applications. See the Programmer's Guide and atom(1) for more information.

third

Checks memory access and detects memory leaks in applications

Performs memory access checks and memory leak detection of C and C++ programs at run time, by using the atom tool to add code to executable and shared objects. The Third Degree tool instruments the entire program, including its referenced libraries. See third(5) for more information.

hiprof

Produces a profile of procedure execution times in an application

An atom-based program profiling tool that produces a flat profile, which shows the execution time spent in any given procedure, and a hierarchical profile, which shows the time spent in a given procedure and all of its descendants. The hiprof tool uses code instrumentation rather than PC sampling to gather statistics. The gprof command is usually used to filter and merge output files and to format profile reports. See hiprof(5) for more information.

pixie

Profiles basic blocks in an application

Reads an executable program, partitions it into basic blocks, and writes an equivalent program containing additional code that counts the execution of each basic block. The pixie utility also generates a file containing the address of each of the basic blocks. When you run this pixie-generated program, it generates a file containing the basic block counts. The prof and pixstats commands can analyze these files. See pixie(5) for more information.

prof

Analyzes profiling data and displays a profile of statistics for each procedure in an application

Analyzes profiling data and produces statistics showing which portions of code consume the most time and where the time is spent (for example, at the routine level, the basic block level, or the instruction level). The prof command uses as input one or more data files generated by the kprofile, uprofile, or pixie profiling tools. The prof command also accepts profiling data files generated by programs linked with the -p switch of compilers such as cc. The information produced by prof allows you to determine where to concentrate your efforts to optimize source code. See prof(1) for more information.

gprof

Analyzes profiling data and displays procedure call information and statistical PC sampling in an application

Analyzes profiling data and allows you to determine which routines are called most frequently and the source of the routine call, by gathering procedure call information and performing statistical program counter (PC) sampling. The gprof tool produces a flat profile of the routines' CPU usage. To produce a graphical execution profile of a program, the tool uses data from PC sampling profiles, which are produced by programs compiled with the cc -pg command, or from instrumented profiles, which are produced by programs modified by the atom -tool hiprof command. See gprof(1) for more information.

kprofile

Produces a PC profile of a running kernel

Profiles a running kernel using the performance counters on the Alpha chip. You analyze the performance data collected by the tool with the prof command. See kprofile(1) for more information.

uprofile

Profiles user code in an application

Profiles user code using performance counters in the Alpha chip. The uprofile tool allows you to profile only the executable part of a program. The uprofile tool does not collect information on shared libraries. You process the performance data collected by the tool with the prof command. See the Kernel Debugging manual or uprofile(1) for more information.

dbx

Debugs running kernels, programs, and crash dumps, and examines and temporarily modifies kernel variables

Provides source-level debugging for C, Fortran, Pascal, assembly language, and machine code. The dbx debugger allows you to analyze crash dumps, trace problems in a program object at the source-code level or at the machine code level, control program execution, trace program logic and flow of control, and monitor memory locations. Use dbx to debug kernels, debug stripped images, examine memory contents, debug multiple threads, analyze user code and applications, display the value and format of kernel data structures, and temporarily modify the values of some kernel variables. See dbx(8) for more information.

kdbx

Debugs running kernels and crash dumps

Allows you to examine a running kernel or a crash dump. The kdbx debugger, a frontend to the dbx debugger, is tailored specifically to debugging kernel code and displays kernel data in a readable format. The debugger is extensible and customizable, allowing you to create commands that are tailored to your kernel debugging needs. You can also use extensions to check resource usage (for example, CPU usage). See kdbx(8) for more information.

ladebug

Debugs kernels and applications

Debugs programs and the kernel and helps locate run-time programming errors. The ladebug symbolic debugger is an alternative to the dbx debugger and provides both command-line and graphical user interfaces and support for debugging multithreaded programs. See the Ladebug Debugger Manual and ladebug(1) for more information.



2.4    Gathering CPU and Virtual Memory Information

Use the following commands to obtain information about CPUs and the virtual memory subsystem: ps (Section 2.4.1), vmstat (Section 2.4.2), uptime (Section 2.4.3), swapon (Section 2.4.4), kdbx cpustat (Section 2.4.5), kdbx lockstats (Section 2.4.6), and dbx print vm_perfsum (Section 2.4.7). The following sections describe these commands in detail.

In addition, you can use the w, xload, and ipcs commands and the memx and shmx exercisers, described in Table 2-1, to obtain CPU and virtual memory information.



2.4.1    Using ps to Display CPU and Memory Usage

The ps command displays the current status of the system processes. You can use it to determine which processes are running (and their owners), their states, and how they use system memory. The command lists processes in order of decreasing CPU usage, so you can identify which processes are using the most CPU time. Note that the ps command provides only a snapshot of the system; by the time the command finishes executing, the system state has probably changed. In addition, one of the first lines of the output may refer to the ps command itself.

An example of the ps command is as follows:

# ps aux
USER  PID  %CPU %MEM   VSZ   RSS  TTY S    STARTED      TIME  COMMAND
chen  2225  5.0  0.3  1.35M  256K p9  U    13:24:58  0:00.36  cp /vmunix /tmp
root  2236  3.0  0.5  1.59M  456K p9  R  + 13:33:21  0:00.08  ps aux
sorn  2226  1.0  0.6  2.75M  552K p9  S  + 13:25:01  0:00.05  vi met.ps
root   347  1.0  4.0  9.58M 3.72M ??  S      Nov 07 01:26:44  /usr/bin/X11/X -a
root  1905  1.0  1.1  6.10M 1.01M ??  R    16:55:16  0:24.79  /usr/bin/X11/dxpa
mat   2228  0.0  0.5  1.82M  504K p5  S  + 13:25:03  0:00.02  more
mat   2202  0.0  0.5  2.03M  456K p5  S    13:14:14  0:00.23  -csh (csh)
root     0  0.0 12.7   356M 11.9M ??  R <  Nov 07 3-17:26:13  [kernel idle]
             [1]  [2]     [3]     [4]     [5]                 [6]       [7]
 

The ps command output includes the following information that you can use to diagnose CPU and virtual memory problems:

  1. Percentage of CPU time usage (%CPU)

  2. Percentage of real memory usage (%MEM)

  3. Process virtual address size (VSZ)--This is the total amount of virtual memory allocated to the process.

  4. Real memory (resident set) size of the process (RSS)--This is the total amount of physical memory mapped to virtual pages (that is, the total amount of memory that the application has physically used). Shared memory is included in the resident set size figures; as a result, the total of these figures may exceed the total amount of physical memory available on the system.

  5. Process status or state (S)--This indicates the current state of the process, such as runnable (R), sleeping (S), or uninterruptible (U), as shown in the example output.

  6. Current CPU time used (TIME).

  7. The command that is running (COMMAND).

From the output of the ps command, you can determine which processes are consuming most of your system's CPU time and memory and whether processes are swapped out. Concentrate on processes that are runnable or paging, and watch for processes that consume far more CPU time or memory than you expect.

For information about memory tuning, see Chapter 4. For information about improving the performance of your applications, see the Programmer's Guide.



2.4.2    Using vmstat to Display Virtual Memory and CPU Statistics

The vmstat command shows the virtual memory, process, and total CPU statistics for a specified time interval. The first line of the output is for all time since a reboot, and each subsequent report is for the last interval. Because the CPU operates faster than the rest of the system, performance bottlenecks usually exist in the memory or I/O subsystems.

To determine the amount of memory on your system, use the uerf -r 300 command. The beginning of the listing shows the total amount of physical memory (including wired memory) and the amount of available memory.
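For example, the following command filters the startup records for lines that report memory sizes; the exact message text varies by system:

# uerf -r 300 | grep -i memory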

An example of the vmstat command is as follows; output is provided in one-second intervals:

# vmstat 1
Virtual Memory Statistics: (pagesize = 8192)
procs        memory            pages                       intr        cpu
r  w  u  act  free wire  fault cow zero react pin pout   in  sy  cs  us sy  id
2 66 25  6417 3497 1570  155K  38K  50K    0  46K    0    4 290 165   0  2  98
4 65 24  6421 3493 1570   120    9   81    0    8    0  585 865 335  37 16  48
2 66 25  6421 3493 1570    69    0   69    0    0    0  570 968 368   8 22  69
4 65 24  6421 3493 1570    69    0   69    0    0    0  554 768 370   2 14  84
4 65 24  6421 3493 1570    69    0   69    0    0    0  865  1K 404   4 20  76
               [1]                                    [2]       [3]         [4]
 

The vmstat command includes information that you can use to diagnose CPU and virtual memory problems. The following fields are particularly important:

  1. Virtual memory information (memory), including the number of pages on the active list (act), which includes inactive pages and Unified Buffer Cache least-recently used (UBC LRU) pages; the number of pages on the free list (free); and the number of pages on the wire list (wire). Pages on the wire list cannot be reclaimed. See Chapter 4 for more information on page lists.

  2. The number of pages that have been paged out (pout).

  3. Interrupt information (intr), including the number of nonclock device interrupts per second (in), the number of system calls called per second (sy), and the number of task and thread context switches per second (cs).

  4. CPU usage information (cpu), including the percentage of user time for normal and priority processes (us), the percentage of system time (sy), and the percentage of idle time (id). User time includes the time the CPU spent executing library routines. System time includes the time the CPU spent executing system calls.

When diagnosing a bottleneck, examine these fields over several intervals and compare the values with those you gathered when the system was performing well. In particular, watch for sustained pageout activity (pout) combined with a persistently small free list (free).

See Chapter 3 for information on improving CPU performance and Chapter 4 for information on tuning memory.



2.4.3    Using uptime to Display the Load Average

The uptime command shows how long a system has been running and the load average. The load average includes jobs that are waiting for disk I/O and applications whose priorities have been changed with the nice or renice command. The load average numbers give the average number of jobs in the run queue for the last 5 seconds, the last 30 seconds, and the last 60 seconds.

An example of the uptime command is as follows:

# uptime
1:48pm  up 7 days,  1:07,  35 users,  load average: 7.12, 10.33, 10.31

The command output displays the current time, the amount of time since the system was last started, the number of users logged into the system, and the load averages for the last 5 seconds, the last 30 seconds, and the last 60 seconds.

From the command output, you can determine whether the load is increasing or decreasing. An acceptable load average depends on your type of system and how it is being used. In general, for a large system, a load of 10 is high, and a load of 3 is low. Workstations should have a load of 1 or 2. If the load is high, look at what processes are running with the ps command. You may want to run some applications during offpeak hours. You can also lower the priority of applications with the nice or renice command to conserve CPU cycles.
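For example, assuming the cp process with PID 2225 shown in the earlier ps listing were a suitable candidate, the following hedged sketch would lower its priority by an increment of 4:

# renice -n 4 -p 2225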



2.4.4    Using swapon to Display Swap Space Usage

Use the swapon -s command to display your swap device configuration. For each swap partition, the command displays the total amount of allocated swap space, the amount of swap space that is being used, and the amount of free swap space. This information can help you determine how your swap space is being utilized.

An example of the swapon command is as follows:

# swapon -s
Swap partition /dev/rz2b (default swap):
    Allocated space:        16384 pages (128MB)
    In-use space:               1 pages (  0%)
    Free space:             16383 pages ( 99%)
 
Swap partition /dev/rz12c:
    Allocated space:       128178 pages (1001MB)
    In-use space:               1 pages (  0%)
    Free space:            128177 pages ( 99%)
 
 
Total swap allocation:
    Allocated space:       144562 pages (1129MB)
    Reserved space:          2946 pages (  2%)
    In-use space:               2 pages (  0%)
    Available space:       141616 pages ( 97%)

See Chapter 4 and Chapter 5 for information on how to configure swap space. Use the iostat command to determine which disks are being used the most.



2.4.5    Checking CPU Usage With kdbx cpustat

The kdbx cpustat extension displays CPU statistics, including the percentages of time each CPU spends in the following states: user mode (User), user mode running at reduced (nice) priority (Nice), system mode (System), idle (Idle), and waiting for I/O (Wait).

By default, kdbx displays statistics for all CPUs in the system.

For example:

(kdbx) cpustat
 Cpu   User (%)    Nice (%) System (%)  Idle (%)   Wait (%)
===== ========== ========== ========== ========== ==========
    0       0.23       0.00       0.08      99.64       0.05
    1       0.21       0.00       0.06      99.68       0.05

See the Kernel Debugging manual and kdbx(8) for more information.



2.4.6    Checking Lock Usage With kdbx lockstats

The kdbx lockstats extension displays lock statistics for each lock class on each CPU in the system.

See the Kernel Debugging manual and kdbx(8) for more information.



2.4.7    Checking Virtual Memory With dbx vm_perfsum

You can check virtual memory by using the dbx command and examining the vm_perfsum data structure.

An example of the dbx print vm_perfsum command is as follows:

(dbx) print vm_perfsum
struct {
    vpf_pagefaults = 2657316
    vpf_kpagefaults = 23527
    vpf_cowfaults = 747352
    vpf_cowsteals = 964903
    vpf_zfod = 409170
    vpf_kzfod = 23491
    vpf_pgiowrites = 6768
    vpf_pgwrites = 12646
    vpf_pgioreads = 981605
    vpf_pgreads = 80157
    vpf_swapreclaims = 0
    vpf_taskswapouts = 1404
    vpf_taskswapins = 1386
    vpf_vmpagesteal = 0
    vpf_vmpagewrites = 7304
    vpf_vmpagecleanrecs = 14898
    vpf_vplmsteal = 36
    vpf_vplmstealwins = 33
    vpf_vpseqdrain = 2
    vpf_ubchit = 3593
    vpf_ubcalloc = 133065
    vpf_ubcpushes = 3
    vpf_ubcpagepushes = 3
    vpf_ubcdirtywra = 1
    vpf_ubcreclaim = 0
    vpf_ubcpagesteal = 52092
    vpf_ubclookups = 2653080
    vpf_ubclookuphits = 2556591
    vpf_reactivate = 135376
    vpf_allocatedpages = 6528
    vpf_vmwiredpages = 456
    vpf_ubcwiredpages = 0
    vpf_mallocpages = 1064
    vpf_totalptepages = 266
    vpf_contigpages = 3
    vpf_rmwiredpages = 0
    vpf_ubcpages = 2785
    vpf_freepages = 190
    vpf_vmcleanpages = 215
    vpf_swapspace = 8943
}
(dbx)

Important fields include vpf_pagefaults, vpf_pgiowrites, vpf_freepages, and vpf_swapspace, which report page-fault activity, pageout I/O operations, the size of the free page list, and the remaining swap space.

To obtain information about the current use of memory, use the dbx print command to display the values of kernel variables such as vm_page_free_count.

The following example shows the current value of the vm_page_free_count kernel variable:

(dbx) print vm_page_free_count
336

See Chapter 4 for information on managing memory resources.



2.5    Gathering General Disk Information

Use the following commands to gather general information about disks: iostat (Section 2.5.1), dbx print nchstats (Section 2.5.2), dbx print vm_perfsum (Section 2.5.3), and dbx print of the CAM data structures (Section 2.5.4).

The following sections describe these commands in detail. You can also use the diskx exerciser to test disk drivers. See diskx(8).



2.5.1    Using iostat to Display Disk Usage

The iostat command reports I/O statistics for terminals, disks, and the CPU. The first line of the output is the average since boot time, and each subsequent report is for the last interval.

An example of the iostat command is as follows; output is provided in one-second intervals:

# iostat 1
      tty     rz1      rz2      rz3      cpu
 tin tout bps tps  bps tps  bps tps  us ni sy id
  0    3   3   1    0   0    8   1   11 10 38 40
  0   58   0   0    0   0    0   0   46  4 50  0
  0   58   0   0    0   0    0   0   68  0 32  0
  0   58   0   0    0   0    0   0   55  2 42  0

The iostat command reports I/O statistics that you can use to diagnose disk I/O performance problems. For each disk, the command displays the bytes per second (bps) and transfers per second (tps); for the CPU, it displays the percentage of time spent in user mode (us), running low-priority (nice) processes (ni), in system mode (sy), and idle (id).

The iostat command can help you determine which disks are being used the most and whether the disk I/O load is balanced across devices.

See Chapter 5 for more information on how to improve disk performance.



2.5.2    Checking the namei Cache With dbx nchstats

The namei cache is used by UFS, AdvFS, CD-ROM File System (CDFS), and NFS to store recently used file system pathname/inode number pairs. It also stores inode information for files that were referenced but not found. Having this information in the cache substantially reduces the amount of searching that is needed to perform pathname translations.

To check the namei cache, use the dbx debugger and look at the nchstats data structure. In particular, look at the ncs_goodhits, ncs_neghits, and ncs_miss fields to determine the hit rate: divide the sum of ncs_goodhits and ncs_neghits by the sum of ncs_goodhits, ncs_neghits, and ncs_miss. The hit rate should be above 80 percent.

Consider the following example:

(dbx) print nchstats
struct {
    ncs_goodhits = 9748603   -found a pair
    ncs_neghits = 888729     -found a pair that didn't exist
    ncs_badhits = 23470
    ncs_falsehits = 69371
    ncs_miss = 1055430       -did not find a pair
    ncs_long = 4067          -name was too long to fit in the cache
    ncs_pass2 = 127950
    ncs_2passes = 195763
    ncs_dirscan = 47
}
(dbx)
 

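In this example, the hit rate is the sum of 9748603 and 888729 divided by the sum of 9748603, 888729, and 1055430, or approximately 91 percent, which is above the 80 percent guideline.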
See Chapter 5 for information on how to improve the namei cache hit rate and lookup speeds.



2.5.3    Checking the UBC With dbx vm_perfsum

To check the Unified Buffer Cache (UBC), use the dbx debugger to examine the vm_perfsum data structure. In particular, look at the vpf_pgiowrites field (number of I/O operations for pageouts generated by the page stealing daemon) and the vpf_ubcalloc field (number of times the UBC had to allocate a page from the virtual memory free page list to satisfy memory demands).

Consider the following example:

(dbx) print vm_perfsum
struct {
    vpf_pagefaults = 493749
    vpf_kpagefaults = 3851
    vpf_cowfaults = 144197
    vpf_cowsteals = 99541
    vpf_zfod = 65590
    vpf_kzfod = 3846
    vpf_pgiowrites = 863
    vpf_pgwrites = 1572
    vpf_pgioreads = 187350
    vpf_pgreads = 17228
    vpf_swapreclaims = 0
    vpf_taskswapouts = 297
    vpf_taskswapins = 272
    vpf_vmpagesteal = 0
    vpf_vmpagewrites = 843
    vpf_vmpagecleanrecs = 1270
    vpf_vplmsteal = 18
    vpf_vplmstealwins = 16
    vpf_vpseqdrain = 0
    vpf_ubchit = 398
    vpf_ubcalloc = 21683
    vpf_ubcpushes = 0
    vpf_ubcpagepushes = 0
    vpf_ubcdirtywra = 0
    vpf_ubcreclaim = 0
    vpf_ubcpagesteal = 7071
    vpf_ubclookups = 364856
    vpf_ubclookuphits = 349473
    vpf_reactivate = 17352
    vpf_allocatedpages = 5800
    vpf_vmwiredpages = 437
    vpf_ubcwiredpages = 0
    vpf_mallocpages = 1115
    vpf_totalptepages = 207
    vpf_contigpages = 3
    vpf_rmwiredpages = 0
    vpf_ubcpages = 2090
    vpf_freepages = 918
    vpf_vmcleanpages = 213
    vpf_swapspace = 7996
}
(dbx)

The vpf_ubcpages field gives the number of pages of physical memory that the UBC is using to cache file data. If the UBC is using significantly more than half of physical memory and the paging rate is high (vpf_pgiowrites field), you may want to reduce the amount of memory available to the UBC to reduce paging. The default value of the ubc-maxpercent attribute is 100 percent of physical memory. Decrease this value only by increments of 10. However, reducing the value of the ubc-maxpercent attribute may degrade file system performance.
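For example, you could lower the attribute in the /etc/sysconfigtab file with a stanza like the following hedged sketch; the value shown is only illustrative, and depending on your operating system version the change may not take effect until you reboot:

vm:
    ubc-maxpercent = 90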

You can also monitor the UBC by examining the ufs_getapage_stats kernel data structure. To calculate the hit rate, divide the value for read_hits by the value for read_looks. A good hit rate is above 95 percent.

Consider the following example:

(dbx) print ufs_getapage_stats
struct {
    read_looks = 2059022
    read_hits = 2022488
    read_miss = 36506
}
(dbx)

In the previous example, the hit rate is 2022488 divided by 2059022, or approximately 98 percent.

You can also check the UBC by examining the vm_tune data structure and the vt_ubcseqpercent and vt_ubcseqstartpercent fields. These values are used to prevent a large file from completely filling the UBC, which limits the amount of memory available to the virtual memory subsystem.

Consider the following example:

(dbx) print vm_tune
struct {
    vt_cowfaults = 4
    vt_mapentries = 200
    vt_maxvas = 1073741824
    vt_maxwire = 16777216
    vt_heappercent = 7
    vt_anonklshift = 17
    vt_anonklpages = 1
    vt_vpagemax = 16384
    vt_segmentation = 1
    vt_ubcpagesteal = 24
    vt_ubcdirtypercent = 10
    vt_ubcseqstartpercent = 50
    vt_ubcseqpercent = 10
    vt_csubmapsize = 1048576
    vt_ubcbuffers = 256
    vt_syncswapbuffers = 128
    vt_asyncswapbuffers = 4
    vt_clustermap = 1048576
    vt_clustersize = 65536
    vt_zone_size = 0
    vt_kentry_zone_size = 16777216
    vt_syswiredpercent = 80
    vt_inswappedmin = 1
}

When copying large files, the source and destination objects in the UBC will grow very large (up to all of the available physical memory). Reducing the value of the vm-ubcseqpercent attribute decreases the number of UBC pages that will be used to cache a large sequentially accessed file. The value represents the percentage of UBC memory that a sequentially accessed file can consume before it starts reusing UBC memory. The value imposes a resident set size limit on a file.
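Before changing the attribute, you can check its current setting by querying the vm subsystem. The following is a hedged sketch; the exact output format varies by version:

# sysconfig -q vm vm-ubcseqpercent
vm:
vm-ubcseqpercent = 10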

See Chapter 4 for information on how to tune the UBC.



2.5.4    Monitoring CAM Data Structures With dbx

The operating system uses the Common Access Method (CAM) as the operating system interface to the hardware. CAM maintains the xpt_qhead, ccmn_bp_head, and xpt_cb_queue data structures, which track CAM control block (CCB) allocation, buffer pool usage, and completed I/O operations, respectively.

The following examples use the dbx debugger to examine these three data structures:

(dbx) print xpt_qhead
struct {
    xws = struct {
        x_flink = 0xffffffff81f07400
        x_blink = 0xffffffff81f03000
        xpt_flags = 2147483656
        xpt_ccb = (nil)
        xpt_nfree = 300
        xpt_nbusy = 0
    }
    xpt_wait_cnt = 0
    xpt_times_wait = 2
    xpt_ccb_limit = 1048576
    xpt_ccbs_total = 300
    x_lk_qhead = struct {
        sl_data = 0
        sl_info = 0
        sl_cpuid = 0
        sl_lifms = 0
    }
}
(dbx) print ccmn_bp_head
struct {
    num_bp = 50
    bp_list = 0xffffffff81f1be00
    bp_wait_cnt = 0
}
(dbx) print xpt_cb_queue
struct {
    flink = 0xfffffc00004d6828
    blink = 0xfffffc00004d6828
    flags = 0
    initialized = 1
    count = 0
    cplt_lock = struct {
        sl_data = 0
        sl_info = 0
        sl_cpuid = 0
        sl_lifms = 0
    }
}
(dbx)

If the values for xpt_wait_cnt or bp_wait_cnt are nonzero, CAM has run out of buffer pool space. If this situation persists, you may be able to eliminate the problem by changing one or more of CAM's I/O attributes (see Chapter 5).

The count parameter in xpt_cb_queue is the number of I/O operations that have been completed and are ready to be passed back to a peripheral device driver. Normally, the value of count should be 0 or 1. If the value is greater than 1, it may indicate either a problem or a temporary situation in which a large number of I/O operations are completing simultaneously. If repeated monitoring demonstrates that the value is consistently greater than 1, one or more subsystems may require tuning.
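One way to check the count repeatedly is to attach dbx to the running kernel and print the field at intervals. The following is a hedged sketch; the value shown is only illustrative:

# dbx -k /vmunix /dev/mem
(dbx) print xpt_cb_queue.count
0
(dbx) quit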



2.6    Gathering UFS Information

Use the following commands to gather information about the UNIX file system (UFS): dumpfs (Section 2.6.1), dbx print ufs_clusterstats (Section 2.6.2), and dbx print bio_stats (Section 2.6.3).

The following sections describe these commands in detail.

In addition, you can use the fsx exerciser to test UFS and AdvFS file systems. See fsx(8) for information.



2.6.1    Using dumpfs to Display UFS Information

The dumpfs command displays UFS information, including super block and cylinder group information, for a specified file system. Use this command to obtain information about the file system fragment size and the minimum free space percentage.

The following example shows part of the output of the dumpfs command:

# dumpfs /dev/rrz3g | more
magic   11954   format  dynamic time    Tue Sep 14 15:46:52 1996
nbfree  21490   ndir    9       nifree  99541   nffree  60
ncg     65      ncyl    1027    size    409600  blocks  396062
bsize   8192    shift   13      mask    0xffffe000
fsize   1024    shift   10      mask    0xfffffc00
frag    8       shift   3       fsbtodb 1
cpg     16      bpg     798     fpg     6384    ipg     1536
minfree 10%     optim   time    maxcontig 8     maxbpg  2048
rotdelay 0ms    headswitch 0us  trackseek 0us   rps     60

The information contained in the first lines of the output is relevant for tuning. Of specific interest are the file system block size (bsize), the fragment size (fsize), and the minimum free space percentage (minfree).

See Chapter 5 for more information about improving disk I/O performance.



2.6.2    Checking UFS Clustering With dbx ufs_clusterstats

To check UFS clustering, use the dbx debugger to examine the ufs_clusterstats data structure, which shows how efficiently the system is performing cluster read and write transfers. You can examine cluster reads and writes separately with the ufs_clusterstats_read and ufs_clusterstats_write data structures.

The following example shows a system that is not clustering efficiently:

(dbx) print ufs_clusterstats
struct {
    full_cluster_transfers = 3130
    part_cluster_transfers = 9786
    non_cluster_transfers = 16833
    sum_cluster_transfers = {
        [0] 0
        [1] 24644
        [2] 1128
        [3] 463
        [4] 202
        [5] 55
        [6] 117
        [7] 36
        [8] 123
        [9] 0
    }
}
(dbx)

The preceding example shows 24644 single-block transfers and no 9-block transfers. A single block is 8 KB. The trend of the data shown in the example is the reverse of what you want to see: a large number of single-block transfers and a declining number of multiblock (2 to 9 blocks) transfers. However, if the files are all small, this may be the best blocking that you can achieve.
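In this example, 16833 of the 29749 total transfers (the sum of 3130 full, 9786 partial, and 16833 non-cluster transfers), or about 57 percent, bypassed clustering entirely.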

See Chapter 5 for information on how to tune a UFS file system.



2.6.3    Checking the Metadata Buffer Cache With dbx bio_stats

The metadata buffer cache contains UFS file metadata--superblocks, inodes, indirect blocks, directory blocks, and cylinder group summaries. To check the metadata buffer cache, use the dbx debugger to examine the bio_stats data structure.

Consider the following example:

(dbx) print bio_stats
struct {
    getblk_hits = 4590388
    getblk_misses = 17569
    getblk_research = 0
    getblk_dupbuf = 0
    getnewbuf_calls = 17590
    getnewbuf_buflocked = 0
    vflushbuf_lockskips = 0
    mntflushbuf_misses = 0
    mntinvalbuf_misses = 0
    vinvalbuf_misses = 0
    allocbuf_buflocked = 0
    ufssync_misses = 0
}
(dbx)

If the miss rate is high, you may want to raise the value of the bufcache attribute. The number of block misses (getblk_misses) divided by the sum of block misses and block hits (getblk_hits) should not be more than 3 percent.
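In the previous example, the miss rate is 17569 divided by the sum of 17569 and 4590388, or approximately 0.4 percent, which is well within the 3 percent guideline.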

See Chapter 4 for information on how to tune the metadata buffer cache.



2.7    Gathering AdvFS Information

Use the following commands to gather information about the Advanced File System (AdvFS): advfsstat (Section 2.7.1), advscan (Section 2.7.2), showfdmn (Section 2.7.3), showfile (Section 2.7.4), and showfsets (Section 2.7.5).

The following sections describe these commands in detail.

In addition, you can use the fsx exerciser to test AdvFS and UFS file systems. See fsx(8) for more information.



2.7.1    Using advfsstat to Display AdvFS Performance Information

The advfsstat command displays various AdvFS performance statistics. The command reports information in units of one disk block (512 bytes) for each interval of time (the default is one second).

Use the advfsstat command to monitor the performance of AdvFS domains and filesets. Use this command to obtain detailed information, especially if the iostat command output indicates a disk bottleneck.

The advfsstat command displays detailed information about a file domain, including information about the AdvFS buffer cache, fileset vnode operations, locks, the namei cache, and volume I/O performance. You can use the -i option to output information at specific time intervals.

For example:

# advfsstat -v 2 test_domain
vol1
  rd  wr  rg  arg  wg  awg  blk  wlz  rlz  con  dev
  54   0  48  128   0    0    0    1   0     0   65

Compare the number of read requests (rd) to the number of write requests (wr). Read requests are blocked until the read completes, but write requests will not block the calling thread, which increases the throughput of multiple threads.

Consolidating reads and writes improves performance. The consolidated read values (rg and arg) and write values (wg and awg) indicate the number of disparate reads and writes that were consolidated into a single I/O to the device driver. If the number of consolidated reads and writes is low compared to the total number of reads and writes, AdvFS may not be consolidating I/O effectively.

The I/O queue values (blk through dev) can indicate potential performance issues. The con value specifies the number of entries on the consolidate queue; these entries are ready to be consolidated and moved to the device queue. The device queue value (dev) shows the number of I/O requests that have been issued to the device controller and that the system must wait on to complete. If the number of I/O requests on the device queue increases continually and you experience poor performance, applications may be I/O bound on this device.

If an application is I/O bound, you may be able to eliminate the problem by adding more disks to the domain or by striping disks. If the values for both the consolidate queue (con) and the device queue (dev) are large during periods of poor performance, you may want to increase the value of the AdvfsMaxDevQLen attribute. See Section 5.6.2.6 for information about modifying the attribute.
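
For example, to watch the queue values over time, you can combine the -i option with the volume display used earlier (a sketch; it assumes -i combines with -v as it does with -f in the later example, and reuses the test_domain name from above):

# advfsstat -i 5 -v 2 test_domain

This reports the volume statistics, including the con and dev queue values, every 5 seconds, so you can correlate queue growth with periods of poor performance.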

You can monitor the type of requests that applications are issuing by using the advfsstat command's -f flag to display fileset vnode operations. You can display the number of file creates, reads, and writes and other operations for a specified domain or fileset.

The following example shows fileset vnode operations during the startup, running, and completion phases of an application:

# advfsstat -i 3 -f 2 scratch_domain fset1
  lkup  crt geta read writ fsnc dsnc   rm   mv rdir  mkd  rmd link
     0    0    0    0    0    0    0    0    0    0    0    0    0
     4    0   10    0    0    0    0    2    0    2    0    0    0
     0    0    0    0    0    0    0    0    0    0    0    0    0
     0    0    0    0    0    0    0    0    0    0    0    0    0
    24    8   51    0    9    0    0    3    0    0    4    0    0
  1201  324 2985    0  601    0    0  300    0    0    0    0    0
  1275  296 3225    0  655    0    0  281    0    0    0    0    0
  1217  305 3014    0  596    0    0  317    0    0    0    0    0
  1249  304 3166    0  643    0    0  292    0    0    0    0    0
  1175  289 2985    0  601    0    0  299    0    0    0    0    0
   779  148 1743    0  260    0    0  182    0   47    0    4    0
     0    0    0    0    0    0    0    0    0    0    0    0    0
     0    0    0    0    0    0    0    0    0    0    0    0    0

See advfsstat(8) for more information. Note that it is difficult to link performance problems to some statistics, such as buffer cache statistics. In addition, lock behavior cannot be tuned, so lock statistics are of limited use for tuning.


[Contents] [Prev. Chapter] [Prev. Section] [Next Section] [Next Chapter] [Index] [Help]

2.7.2    Using advscan to Identify Disks in an AdvFS File Domain

The advscan command locates pieces of AdvFS domains on disk partitions and in LSM disk groups. Use the advscan command when you have moved disks to a new system, moved disks in a way that changed device numbers, or lost track of where the domains are. You can also use this command for repair if you delete the /etc/fdmns directory, delete a domain directory under /etc/fdmns, or delete links from a domain directory under /etc/fdmns.

The advscan command accepts a list of volumes or disk groups and searches all partitions and volumes in each. It determines which partitions on a disk are part of an AdvFS file domain. You can run the advscan command to rebuild all or part of your /etc/fdmns directory, or you can manually rebuild it by supplying the names of the partitions in a domain.

The following example scans two disks for AdvFS partitions:

# advscan rz0 rz5
 
Scanning disks  rz0 rz5
Found domains:
 
usr_domain
                Domain Id       2e09be37.0002eb40
                Created         Thu Jun 23 09:54:15 1996
 
                Domain volumes          2
                /etc/fdmns links        2
 
                Actual partitions found:
                                        rz0c
                                        rz5c
 

For the following example, the rz6 file domains were removed from /etc/fdmns. The advscan command scans device rz6 and re-creates the missing domains.

# advscan -r rz6
 
Scanning disks  rz6
Found domains:
 
*unknown*
                Domain Id       2f2421ba.0008c1c0
                Created         Mon Jan 23 13:38:02 1996
 
                Domain volumes          1
                /etc/fdmns links        0
 
                Actual partitions found:
                                        rz6a*
*unknown*
                Domain Id       2f535f8c.000b6860
                Created         Tue Feb 28 09:38:20 1996
 
                Domain volumes          1
                /etc/fdmns links        0
 
                Actual partitions found:
                                        rz6b*
 
Creating /etc/fdmns/domain_rz6a/
        linking rz6a
 
Creating /etc/fdmns/domain_rz6b/
        linking rz6b
 

See advscan(8) for more information.


[Contents] [Prev. Chapter] [Prev. Section] [Next Section] [Next Chapter] [Index] [Help]

2.7.3    Using showfdmn to Display AdvFS File Domain Information

The showfdmn command displays the attributes of an AdvFS file domain and detailed information about each volume in the file domain.

The following example of the showfdmn command displays domain information for the usr file domain:

% showfdmn usr
 
               Id              Date Created  LogPgs  Domain Name
2b5361ba.000791be  Tue Jan 12 16:26:34 1996     256  usr
 
Vol   512-Blks      Free  % Used  Cmode  Rblks  Wblks  Vol Name
 1L     820164    351580     57%     on    256    256  /dev/rz0d

See showfdmn(8) for more information about the output of the command.


[Contents] [Prev. Chapter] [Prev. Section] [Next Section] [Next Chapter] [Index] [Help]

2.7.4    Using showfile to Display AdvFS File Information

The showfile command displays the full storage allocation map (extent map) for one or more files in an AdvFS fileset. An extent is a contiguous area of disk space that AdvFS allocates to a file. The following example of the showfile command displays the AdvFS-specific attributes for all of the files in the current working directory:

# showfile *
 
       Id  Vol  PgSz  Pages  XtntType  Segs  SegSz    I/O  Perf File
  22a.001    1    16      1    simple    **     **  async  50%  Mail
    7.001    1    16      1    simple    **     **  async  20%  bin
  1d8.001    1    16      1    simple    **     **  async  33%  c
 1bff.001    1    16      1    simple    **     **  async  82%  dxMail
  218.001    1    16      1    simple    **     **  async  26%  emacs
  1ed.001    1    16      0    simple    **     **  async 100%  foo
  1ee.001    1    16      1    simple    **     **  async  77%  lib
  1c8.001    1    16      1    simple    **     **  async  94%  obj
  23f.003    1    16      1    simple    **     **  async 100%  sb
 170a.008    1    16      2    simple    **     **  async  35%  t
    6.001    1    16     12    simple    **     **  async  16%  tmp
 

The I/O column specifies whether the I/O operation is synchronous or asynchronous.

The following example of the showfile command shows the attributes and extent information for the tutorial file, which is a simple file:

# showfile -x tutorial
 
        Id  Vol  PgSz  Pages  XtntType  Segs  SegSz    I/O  Perf    File
 4198.800d    2    16     27    simple    **     **  async   66% tutorial
 
     extentMap: 1
          pageOff    pageCnt    vol    volBlock    blockCnt
                0          5      2      781552          80
                5         12      2      785776         192
               17         10      2      786800         160
       extentCnt: 3
 

The Perf entry shows the efficiency of the file-extent allocation, expressed as a percentage of the optimal extent layout. A high value, such as 100 percent, indicates that the extents are allocated efficiently. A low value indicates that the files may be fragmented. See showfile(8) for more information about the command output.


[Contents] [Prev. Chapter] [Prev. Section] [Next Section] [Next Chapter] [Index] [Help]

2.7.5    Using showfsets to Display AdvFS Filesets in a File Domain

The showfsets command displays the AdvFS filesets (or clone filesets) and their characteristics in a specified domain.

The following is an example of the showfsets command:

# showfsets dmn
 
mnt
          Id           : 2c73e2f9.000f143a.1.8001
          Clone is     : mnt_clone
          Files        :       79,  limit =     1000
          Blocks  (1k) :      331,  limit =    25000
          Quota Status : user=on  group=on
 
mnt_clone
          Id           : 2c73e2f9.000f143a.2.8001
          Clone of     : mnt
          Revision     : 1

See showfsets(8) for information about the options and output of the command.


[Contents] [Prev. Chapter] [Prev. Section] [Next Section] [Next Chapter] [Index] [Help]

2.8    Gathering LSM Information

Use the following commands to gather information about a Logical Storage Manager (LSM) configuration:

    volprint (Section 2.8.1) -- Displays LSM configuration information
    volstat (Section 2.8.2) -- Displays LSM performance information
    voltrace (Section 2.8.3) -- Displays LSM I/O operation information
    volwatch (Section 2.8.4) -- Monitors LSM for failures
    dxlsm (Section 2.8.5) -- Displays LSM configuration information graphically

The following sections describe these commands in detail.


[Contents] [Prev. Chapter] [Prev. Section] [Next Section] [Next Chapter] [Index] [Help]

2.8.1    Using volprint to Display LSM Configuration Information

The volprint command displays information from records in the LSM configuration database. You can select the records to be displayed by name or by using special search expressions. In addition, you can display record association hierarchies, so that the structure of records is more apparent.

Use the volprint command to display disk group, disk media, volume, plex, and subdisk records. Use the voldisk list command to display disk access records (physical disk information).

The following example uses the volprint command to show the status of the voldev1 volume:

# volprint -ht voldev1
DG NAME        GROUP-ID
DM NAME        DEVICE       TYPE     PRIVLEN  PUBLEN   PUBPATH
V  NAME        USETYPE      KSTATE   STATE    LENGTH   READPOL  PREFPLEX
PL NAME        VOLUME       KSTATE   STATE    LENGTH   LAYOUT   ST-WIDTH MODE
SD NAME        PLEX         PLOFFS   DISKOFFS LENGTH   DISK-NAME    DEVICE
 
v  voldev1     fsgen        ENABLED  ACTIVE   804512   SELECT   -
pl voldev1-01  voldev1      ENABLED  ACTIVE   804512   CONCAT   -        RW
sd rz8-01      voldev1-01   0        0        804512   rz8          rz8
pl voldev1-02  voldev1      ENABLED  ACTIVE   804512   CONCAT   -        RW
sd dev1-01     voldev1-02   0        2295277  402256   dev1         rz9
sd rz15-02     voldev1-02   402256   2295277  402256   rz15         rz15

See volprint(8) for more information about command options and output.


[Contents] [Prev. Chapter] [Prev. Section] [Next Section] [Next Chapter] [Index] [Help]

2.8.2    Using volstat to Display LSM Performance Information

The volstat command provides information about activity on volumes, plexes, subdisks, and disks under LSM control. It reports statistics that reflect the activity levels of LSM objects since boot time.

The amount of information displayed depends on which options you specify to volstat. For example, you can display statistics for a specific LSM object, or you can display statistics for all objects at one time. If you specify a disk group, only statistics for objects in that disk group are displayed. If you do not specify a particular disk group, volstat displays statistics for the default disk group (rootdg).

You can also use the volstat command to reset the statistics information to zero. This can be done for all objects or for only specified objects. Resetting the information to zero before a particular operation makes it possible to measure the subsequent impact of that particular operation.
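
A minimal sketch of that measurement pattern follows; it assumes that the -r option resets the counters and that volstat accepts a volume name as an operand (check volstat(8) for the exact syntax on your system):

# volstat -r                  (reset statistics for all objects)
  ...run the operation you want to measure...
# volstat home                (statistics now reflect only that operation)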

The following example uses the volstat command to display statistics on LSM volumes:

# volstat
                OPERATIONS        BLOCKS        AVG TIME(ms)
TYP NAME        READ   WRITE    READ    WRITE   READ   WRITE
vol archive      865     807    5722     3809   32.5    24.0
vol home        2980    5287    6504    10550   37.7   221.1
vol local      49477   49230  507892   204975   28.5    33.5
vol src        79174   23603  425472   139302   22.4    30.9
vol swapvol    22751   32364  182001   258905   25.3   323.2

See volstat(8) for more information about command output.


[Contents] [Prev. Chapter] [Prev. Section] [Next Section] [Next Chapter] [Index] [Help]

2.8.3    Using voltrace to Display LSM I/O Operation Information

The voltrace command reads an event log (/dev/volevent) and prints formatted event log records to standard output. Using voltrace, you can set event trace masks to determine which type of events will be tracked. For example, you can trace I/O events, configuration changes, or I/O errors.

The following example uses the voltrace command to display status on all new events:

# voltrace -n -e all
18446744072623507277 IOTRACE 439: req 3987131 v:rootvol p:rootvol-01 \
  d:root_domain s:rz3-02 iot write lb 0 b 63120 len 8192 tm 12
18446744072623507277 IOTRACE 440: req 3987131 \
  v:rootvol iot write lb 0 b 63136 len 8192 tm 12

See voltrace(8) for more information about command options and output.


[Contents] [Prev. Chapter] [Prev. Section] [Next Section] [Next Chapter] [Index] [Help]

2.8.4    Using volwatch to Monitor LSM Failures

The volwatch shell script is automatically started when you install LSM. This script sends mail if certain LSM configuration events occur, such as a plex detach caused by a disk failure. By default, the mail is sent to root; you can also specify another mail recipient.

See volwatch(8) for more information.


[Contents] [Prev. Chapter] [Prev. Section] [Next Section] [Next Chapter] [Index] [Help]

2.8.5    Using dxlsm to Display LSM Configuration Information

The LSM graphical user interface (GUI), dxlsm, includes an Analyze menu that allows you to display statistics about volumes, LSM disks, and subdisks. The information is displayed graphically, using colors and patterns on the disk icons, and numerically, using the Analysis Statistics form. You can use the Analysis Parameters form to customize the displayed information.

See the Logical Storage Manager manual and dxlsm(8X) for more information about dxlsm.


[Contents] [Prev. Chapter] [Prev. Section] [Next Section] [Next Chapter] [Index] [Help]

2.9    Gathering Network Information

Use the following commands to gather network performance information:

    netstat (Section 2.9.1) -- Displays network statistics
    nfsstat (Section 2.9.2) -- Displays network and NFS statistics
    sysconfig (Section 2.9.3) -- Displays socket listen queue statistics
    ps (Section 2.9.4) -- Displays idle I/O thread information

The following sections describe these commands in detail.

In addition, you can use the following commands to obtain network information:


[Contents] [Prev. Chapter] [Prev. Section] [Next Section] [Next Chapter] [Index] [Help]

2.9.1    Using netstat to Display Network Information

To check network statistics, use the netstat command. Some problems to look for are as follows:

    Input errors (Ierrs) and output errors (Oerrs) in the netstat -i output
    A high collision rate (Coll) relative to output packets in the netstat -i output
    Retransmissions, duplicate packets, and dropped fragments in the netstat -s output

Most of the information provided by netstat is used to diagnose network hardware or software failures, not to analyze tuning opportunities. See the Network Administration manual for more information on how to diagnose failures.

The following example shows the output produced by the netstat -i command:

# netstat -i
Name  Mtu   Network     Address         Ipkts Ierrs    Opkts Oerrs  Coll
ln0   1500  DLI         none           133194     2    23632     4  4881
ln0   1500  <Link>                     133194     2    23632     4  4881
ln0   1500  red-net     node1          133194     2    23632     4  4881
sl0*  296   <Link>                          0     0        0     0     0
sl1*  296   <Link>                          0     0        0     0     0
lo0   1536  <Link>                        580     0      580     0     0
lo0   1536  loop        localhost         580     0      580     0     0
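
A quick way to put the Coll column in perspective is to express collisions as a percentage of output packets. The values below come from the ln0 line above; the 10 percent threshold is a common rule of thumb, not a documented limit:

# echo "scale=2; 100 * 4881 / 23632" | bc
20.65

A collision rate above about 10 percent suggests a congested network segment.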
 

Use the following netstat command to determine the causes of the input errors (Ierrs) and output errors (Oerrs) shown in the preceding example:

# netstat -is
 
ln0 Ethernet counters at Fri Jan 14 16:57:36 1996
 
        4112 seconds since last zeroed
    30307093 bytes received
     3722308 bytes sent
      133245 data blocks received
       23643 data blocks sent
    14956647 multicast bytes received
      102675 multicast blocks received
       18066 multicast bytes sent
         309 multicast blocks sent
        3446 blocks sent, initially deferred
        1130 blocks sent, single collision
        1876 blocks sent, multiple collisions
           4 send failures, reasons include:
                Excessive collisions
           0 collision detect check failure
           2 receive failures, reasons include:
                Block check error
                Framing Error
           0 unrecognized frame destination
           0 data overruns
           0 system buffer unavailable
           0 user buffer unavailable

The netstat -s command displays the following statistics for each protocol:

# netstat -s
ip:
        67673 total packets received
        0 bad header checksums
        0 with size smaller than minimum
        0 with data size < data length
        0 with header length < data size
        0 with data length < header length
        8616 fragments received
        0 fragments dropped (dup or out of space)
        5 fragments dropped after timeout
        0 packets forwarded
        8 packets not forwardable
        0 redirects sent
icmp:
        27 calls to icmp_error
        0 errors not generated because old message was icmp
        Output histogram:
                echo reply: 8
                destination unreachable: 27
        0 messages with bad code fields
        0 messages < minimum length
        0 bad checksums
        0 messages with bad length
        Input histogram:
                echo reply: 1
                destination unreachable: 4
                echo: 8
        8 message responses generated
igmp:
        365 messages received
        0 messages received with too few bytes
        0 messages received with bad checksum
        365 membership queries received
        0 membership queries received with invalid field(s)
        0 membership reports received
        0 membership reports received with invalid field(s)
        0 membership reports received for groups to which we belong
        0 membership reports sent
tcp:
        11219 packets sent
                7265 data packets (139886 bytes)
                4 data packets (15 bytes) retransmitted
                3353 ack-only packets (2842 delayed)
                0 URG only packets
                14 window probe packets
                526 window update packets
                57 control packets
        12158 packets received
                7206 acks (for 139930 bytes)
                32 duplicate acks
                0 acks for unsent data
                8815 packets (1612505 bytes) received in-sequence
                432 completely duplicate packets (435 bytes)
                0 packets with some dup. data (0 bytes duped)
                14 out-of-order packets (0 bytes)
                1 packet (0 bytes) of data after window
                0 window probes
                1 window update packet
                5 packets received after close
                0 discarded for bad checksums
                0 discarded for bad header offset fields
                0 discarded because packet too short
        19 connection requests
        25 connection accepts
        44 connections established (including accepts)
        47 connections closed (including 0 drops)
        3 embryonic connections dropped
        7217 segments updated rtt (of 7222 attempts)
        4 retransmit timeouts
                0 connections dropped by rexmit timeout
        0 persist timeouts
        0 keepalive timeouts
                0 keepalive probes sent
                0 connections dropped by keepalive
udp:
        12003 packets sent
        48193 packets received
        0 incomplete headers
        0 bad data length fields
        0 bad checksums
        0 full sockets
        12943 for no port (12916 broadcasts, 0 multicasts)
 

See netstat(1) for information about the output produced by the various options supported by the netstat command.


[Contents] [Prev. Chapter] [Prev. Section] [Next Section] [Next Chapter] [Index] [Help]

2.9.2    Using nfsstat to Display Network and NFS Information

The nfsstat command displays statistical information about the Network File System (NFS) and Remote Procedure Call (RPC) interfaces in the kernel. You can also use this command to reinitialize the statistics.

An example of the nfsstat command is as follows:

# nfsstat
 
Server rpc:
calls     badcalls  nullrecv   badlen   xdrcall
38903     0         0          0        0
 
Server nfs:
calls     badcalls
38903     0
 
Server nfs V2:
null      getattr   setattr    root     lookup     readlink   read
5  0%     3345  8%  61  0%     0  0%    5902 15%   250  0%    1497  3%
wrcache   write     create     remove   rename     link       symlink
0  0%     1400  3%  549  1%    1049  2% 352  0%    250  0%    250  0%
mkdir     rmdir     readdir    statfs
171  0%   172  0%   689  1%    1751  4%
 
Server nfs V3:
null      getattr   setattr    lookup    access    readlink   read
0  0%     1333  3%  1019  2%   5196 13%  238  0%   400  1%    2816  7%
write     create    mkdir      symlink   mknod     remove     rmdir
2560  6%  752  1%   140  0%    400  1%   0  0%     1352  3%   140  0%
rename    link      readdir    readdir+  fsstat    fsinfo     pathconf
200  0%   200  0%   936  2%    0  0%     3504  9%  3  0%      0  0%
commit
21  0%
 
Client rpc:
calls     badcalls  retrans    badxid    timeout   wait       newcred
27989     1         0          0         1         0          0
badverfs  timers
0         4
 
Client nfs:
calls     badcalls  nclget     nclsleep
27988     0         27988      0
 
Client nfs V2:
null      getattr   setattr    root      lookup    readlink   read
0  0%     3414 12%  61  0%     0  0%     5973 21%  257  0%    1503  5%
wrcache   write     create     remove    rename    link       symlink
0  0%     1400  5%  549  1%    1049  3%  352  1%   250  0%    250  0%
mkdir     rmdir     readdir    statfs
171  0%   171  0%   713  2%    1756  6%
 
Client nfs V3:
null      getattr   setattr    lookup    access    readlink   read
0  0%     666  2%   9  0%      2598  9%  137  0%   200  0%    1408  5%
write     create    mkdir      symlink   mknod     remove     rmdir
1280  4%  376  1%   70  0%     200  0%   0  0%     676  2%    70  0%
rename    link      readdir    readdir+  fsstat    fsinfo     pathconf
100  0%   100  0%   468  1%    0  0%     1750  6%  1  0%      0  0%
commit
10  0%
# 
 

The ratio of timeouts to calls (which should not exceed 1 percent) is the most important thing to look for in the NFS statistics. A timeout-to-call ratio greater than 1 percent can have a significant negative impact on performance. See Chapter 6 for information on how to tune your system to avoid timeouts.
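
To check the preceding output against this guideline, use the timeout and calls values from the Client rpc section (1 timeout in 27989 calls):

# echo "scale=4; 100 * 1 / 27989" | bc
.0035

At roughly 0.004 percent, timeouts are far below the 1 percent threshold in this example.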

If you are attempting to monitor an experimental situation with nfsstat, reset the NFS counters to zero before you begin the experiment. Use the nfsstat -z command to clear the counters. See nfsstat(8) for more information about command options and output.


[Contents] [Prev. Chapter] [Prev. Section] [Next Section] [Next Chapter] [Index] [Help]

2.9.3    Checking Socket Listen Queue Statistics With sysconfig

You can determine whether you need to increase the socket listen queue limit by using the sysconfig -q socket command to display the socket subsystem attribute values.
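
For example (the attribute values shown are illustrative, and the output is truncated; your system displays the full attribute list for the socket subsystem):

# sysconfig -q socket
socket:
somaxconn = 1024
sominconn = 0

.
.
.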

See Chapter 6 for information on tuning socket listen queue limits.


[Contents] [Prev. Chapter] [Prev. Section] [Next Section] [Next Chapter] [Index] [Help]

2.9.4    Using ps to Display Idle Thread Information

On a client system, the nfsiod daemons spawn several I/O threads to service asynchronous I/O requests to the server. The I/O threads improve the performance of both NFS reads and writes. The optimum number of I/O threads depends on many variables, such as how quickly the client will be writing, how many files will be accessed simultaneously, and the characteristics of the NFS server. For most clients, seven threads are sufficient.

The following example uses the ps axlmp command to display idle I/O threads on a client system:

# ps axlmp 0 | grep nfs
 
 0  42   0            nfsiod_  S                 0:00.52                 
 0  42   0            nfsiod_  S                 0:01.18                 
 0  42   0            nfsiod_  S                 0:00.36                 
 0  44   0            nfsiod_  S                 0:00.87                 
 0  42   0            nfsiod_  S                 0:00.52                 
 0  42   0            nfsiod_  S                 0:00.45                 
 0  42   0            nfsiod_  S                 0:00.74                 
 
# 

The previous output shows a sufficient number of sleeping threads. If your output shows that few threads are sleeping, you may be able to improve NFS performance by increasing the number of threads. See Chapter 6, nfsiod(8), and nfsd(8) for more information.


[Contents] [Prev. Chapter] [Prev. Section] [Next Section] [Next Chapter] [Index] [Help]

2.10    Gathering Profiling and Debugging Information

For information about the application and kernel profiling and debugging tools that are described in Table 2-7, see the Programmer's Guide, the Kernel Debugging manual, and the reference pages associated with the tools.


[Contents] [Prev. Chapter] [Prev. Section] [Next Section] [Next Chapter] [Index] [Help]

2.11    Modifying the Kernel

Kernel variables, including system attributes and parameters, determine the behavior of the DIGITAL UNIX operating system and subsystems. When you install the operating system or add optional subsystems, the kernel variables are set to their default values. Modifying the values of certain kernel variables may improve system performance. Some kernel variables are used only to monitor the current state of the system.

You can display and modify kernel variable values by using various methods. You can modify some variables by using all methods, but in some cases, you must use a particular method to modify a variable.

Because you can use various methods to assign values to kernel variables, the system uses the following hierarchy to determine which value to use:

    1. Run-time values assigned by using the sysconfig -r command or the dbx debugger (these remain in effect until you reboot)
    2. Attribute values specified in the /etc/sysconfigtab database
    3. Parameter values specified in the system configuration file
    4. Default attribute and parameter values

The following sections describe how to display and modify kernel variables, attributes, and parameters. See the System Administration manual for detailed information about kernel variables, attributes, and parameters.


[Contents] [Prev. Chapter] [Prev. Section] [Next Section] [Next Chapter] [Index] [Help]

2.11.1    Using dbx to Display and Modify Run-Time Kernel Variables

Use the dbx command to examine source files, control program execution, display the state of the program, and debug at the machine-code level. To examine the values of kernel variables and data structures, use the dbx print command and specify the data structure or variable to examine.

The following examples show the dbx print command:

# dbx -k /vmunix /dev/mem 
 
(dbx)  print vm_page_free_count
248
(dbx)

# dbx -k /vmunix /dev/mem 
 
(dbx)  print somaxconn
1024
(dbx)

# dbx -k /vmunix /dev/mem 
 
(dbx)  print vm_perfsum
struct {
    vpf_pagefaults = 1689166
    vpf_kpagefaults = 13690
    vpf_cowfaults = 478504
    vpf_cowsteals = 638970
    vpf_zfod = 255372
    vpf_kzfod = 13654
    vpf_pgiowrites = 3902
 

.
.
.
    vpf_vmwiredpages = 440
    vpf_ubcwiredpages = 0
    vpf_mallocpages = 897
    vpf_totalptepages = 226
    vpf_contigpages = 3
    vpf_rmwiredpages = 0
    vpf_ubcpages = 2995
    vpf_freepages = 265
    vpf_vmcleanpages = 237
    vpf_swapspace = 7806
}
(dbx)

Use the dbx patch command to modify the run-time values of some kernel variables. Note that the values you assign by using the dbx patch command are temporary and are lost when you rebuild the kernel.

An example of the dbx patch command is as follows:

# dbx -k /vmunix /dev/mem 
 
(dbx)  patch somaxconn = 32767
32767
(dbx)

To ensure that the system is utilizing a new kernel variable value, reboot the system. See the Programmer's Guide for detailed information about the dbx debugger.

You can also use the dbx assign command to modify run-time kernel variable values. However, the modifications are lost when you reboot the system.
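
An illustrative dbx assign session follows, using the same somaxconn variable as the patch example (the value shown is hypothetical):

# dbx -k /vmunix /dev/mem

(dbx) assign somaxconn = 32767
32767
(dbx)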


[Contents] [Prev. Chapter] [Prev. Section] [Next Section] [Next Chapter] [Index] [Help]

2.11.2    Using the Kernel Tuner to Display and Modify Attributes

Use the Kernel Tuner (dxkerneltuner), provided by the Common Desktop Environment's (CDE) graphical user interface, to display the current and permanent values for attributes, modify the run-time values (if supported), and modify the permanent values.

To access the Kernel Tuner, click on the Application Manager icon in the CDE menu bar, select System_Admin, and then select MonitoringTuning. You can then click on Kernel Tuner. A pop-up menu containing a list of subsystems appears, allowing you to select a subsystem and generate a display of the subsystem's attributes and values.


[Contents] [Prev. Chapter] [Prev. Section] [Next Section] [Next Chapter] [Index] [Help]

2.11.3    Using the sysconfig Command to Display and Modify Run-Time Attributes

Use the sysconfig command to display the configured subsystems, attribute values, and other attribute information. The command also allows you to modify the run-time values of attributes that support this feature.

Use the sysconfig -s command to list the subsystems that are configured in your system. An example of the sysconfig -s command is as follows:

# sysconfig -s
Cm: loaded and configured
Generic: loaded and configured
Proc: loaded and configured

.
.
.
Xpr: loaded and configured
Rt: loaded and configured
Net: loaded and configured
#

Use the sysconfig -q command and specify a subsystem to display the run-time values of the subsystem attributes. An example of the sysconfig -q command is as follows:

# sysconfig -q vfs
vfs:
name-cache-size = 32768
name-cache-hash-size = 1024
buffer-hash-size = 512

.
.
.
max-ufs-mounts = 1000
vnode-deallocation-enable = 1
pipe-maxbuf-size = 65536
pipe-single-write-max = -1
pipe-databuf-size = 8192
pipe-max-bytes-all-pipes = 81920000
noadd-exec-access = 0
#

If an attribute is not defined in the sysconfigtab database file, the sysconfig -q command returns the default value of the attribute.

To display the minimum and maximum values for an attribute, use the sysconfig -Q command and specify the subsystem. An example of the sysconfig -Q command is as follows:

# sysconfig -Q ufs
ufs:
inode-hash-size - type=INT op=CQ min_val=0 max_val=2147483647
create-fastlinks - type=INT op=CQ min_val=0 max_val=2147483647
ufs-blkpref-lookbehind - type=INT op=CQ min_val=0 max_val=2147483647
nmount - type=INT op=CQ min_val=0 max_val=2147483647
#

To modify the run-time value of an attribute, use the sysconfig -r command and specify the subsystem, the attribute, and the attribute value. Only some attributes support run-time modifications. An example of the sysconfig -r command is as follows:

# sysconfig -r socket somaxconn=1024
somaxconn: reconfigured
#

See the System Administration manual and sysconfig(8) for more information.


[Contents] [Prev. Chapter] [Prev. Section] [Next Section] [Next Chapter] [Index] [Help]

2.11.4    Using the sysconfigdb Command to Modify Attributes

Use the sysconfigdb command to assign new values to attributes in the sysconfigtab database file. Do not manually edit the sysconfigtab database file.
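
A minimal sketch of the workflow follows; it assumes the stanza-file format described in sysconfigdb(8), and the file name and attribute value are examples only:

# cat /tmp/socket_attrs
socket:
        somaxconn = 32767
# sysconfigdb -u -f /tmp/socket_attrs socket

The -u option updates an existing subsystem entry; use -a to add an entry for a subsystem that is not yet in the database.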

After you use the sysconfigdb command, reboot the system or invoke the sysconfig -r command to use the new attribute values.

See the System Administration manual and sysconfigdb(8) for more information.


[Contents] [Prev. Chapter] [Prev. Section] [Next Chapter] [Index] [Help]

2.11.5    Modifying Parameters in the System Configuration File

Use the /usr/sys/conf/SYSTEM configuration file to specify values for kernel parameters. You can edit the file to modify the values currently assigned to the parameters or to add parameters. You must rebuild the kernel and reboot the system to use the new parameter values.

Some kernel attributes have corresponding kernel parameters, but the values permanently assigned to attributes supersede the values permanently assigned to their corresponding parameters in the system configuration file. If possible, always modify an attribute instead of its corresponding parameter.
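
For example (a sketch; MYSYS stands for your system name, the maxusers value is illustrative, and doconfig reports the actual location of the new kernel), after editing the configuration file you rebuild the kernel and reboot:

# grep maxusers /usr/sys/conf/MYSYS
maxusers        128
# doconfig -c MYSYS
# cp /usr/sys/MYSYS/vmunix /vmunix
# shutdown -r now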

See the System Administration manual for descriptions of some parameters and information about modifying the system configuration file and rebuilding the kernel. See Appendix B for a list of attributes that have corresponding parameters.


[Contents] [Prev. Chapter] [Prev. Section] [Next Chapter] [Index] [Help]