6    Managing Memory Performance

You may be able to improve Tru64 UNIX performance by optimizing your memory resources. Usually, the best way to improve performance is to eliminate or reduce paging and swapping. This can be done by increasing memory resources.

This chapter describes:

6.1    Virtual Memory Operation

The operating system allocates physical memory in 8-KB units called pages. The virtual memory subsystem tracks and manages all the physical pages in the system and efficiently distributes the pages among three areas:

You must understand memory operation to determine which tuning guidelines will improve performance for your workload. The following sections describe how the virtual memory subsystem:

6.1.1    Physical Page Tracking

The virtual memory subsystem tracks all the physical memory pages in the system. Page lists are used to identify the location and age of each page. The oldest pages are the first to be reclaimed. At any one time, each physical page can be found on one of the following lists:

Use the vmstat command to determine the number of pages that are on the page lists. Remember that pages on the active list (the act field in the vmstat output) include both inactive and UBC LRU pages.

6.1.2    File System Buffer Cache Memory Allocation

The operating system uses caches to store file system user data and metadata. If the cached data is later reused, a disk I/O operation is avoided, which improves performance because data can be retrieved from memory much faster than it can be read from disk.

The following sections describe these file system caches:

6.1.2.1    Metadata Buffer Cache Memory Allocation

At boot time, the kernel allocates wired memory for the metadata buffer cache. The cache acts as a layer between the operating system and disk by storing recently accessed UFS and CDFS metadata, which includes file header information, superblocks, inodes, indirect blocks, directory blocks, and cylinder group summaries. Performance is improved if the data is later reused and a disk operation is avoided.

The metadata buffer cache uses bcopy routines to move data in and out of memory. Memory in the metadata buffer cache is not subject to page reclamation.

The size of the metadata buffer cache is specified by the value of the vfs subsystem bufcache attribute. See Section 9.1.3 for information on tuning the bufcache attribute.

6.1.2.2    Unified Buffer Cache Memory Allocation

The physical memory that is not wired is available to processes and to the Unified Buffer Cache (UBC), which compete for this memory.

The UBC functions as a layer between the operating system and disk by storing recently accessed file system data for reads and writes from conventional file activity, and by holding pages faulted in from memory-mapped file sections. UFS caches user and application data in the UBC. AdvFS caches user and application data and metadata in the UBC. File system performance improves when that data and metadata are later reused and found in the UBC.

Figure 6-1 shows how the memory subsystem allocates physical memory to the UBC and for processes.

Figure 6-1:  UBC Memory Allocation

At any one time, the amount of memory allocated to the UBC and to processes depends on file system and process demands. For example, if file system activity is heavy and process demand is low, most of the pages will be allocated to the UBC, as shown in Figure 6-2.

Figure 6-2:  Memory Allocation During High File System Activity and No Paging Activity

In contrast, heavy process activity, such as large increases in the working sets for large executables, will cause the memory subsystem to reclaim UBC borrowed pages, down to the value of the ubc_borrowpercent attribute, as shown in Figure 6-3.

Figure 6-3:  Memory Allocation During Low File System Activity and High Paging Activity

The size of the UBC is specified by the values of the vfs subsystem UBC-related attributes. See Section 9.1.2 for information on tuning these attributes.

6.1.3    Process Memory Allocation

Physical memory that is not wired is available to processes and the UBC, which compete for this memory. The virtual memory subsystem allocates memory resources to processes and to the UBC according to the demand, and reclaims the oldest pages if the demand depletes the number of available free pages.

The following sections describe how the virtual memory subsystem allocates memory to processes.

6.1.3.1    Process Virtual Address Space Allocation

The fork system call creates new processes. When a process is created, the fork system call:

  1. Creates a UNIX process body, which includes a set of data structures that the kernel uses to track the process and a set of resource limitations. See fork(2) for more information.

  2. Establishes a contiguous block of virtual address space for the process. Virtual address space is the array of virtual pages that the process can use to map into actual physical memory. Virtual address space is used for anonymous memory (memory that holds data elements and structures that are modified during process execution) and for file-backed memory (memory used for program text or shared libraries).

    Because physical memory is limited, a process' entire virtual address space cannot be in physical memory at one time. However, a process can execute when only a portion of its virtual address space (its working set) is mapped to physical memory. Pages of anonymous memory and file-backed memory are paged in only when needed. If the memory demand increases and pages must be reclaimed, the pages of anonymous memory are paged out and their contents moved to swap space, while the pages of file-backed memory are simply released.

  3. Creates one or more threads of execution. The default is one thread for each process. Multiprocessing systems support multiple process threads.

Although the virtual memory subsystem allocates a large amount of virtual address space for each process, only part of this space is used. 4 TB is allocated for user space. User space is generally private and maps to a nonshared physical page. An additional 4 TB of virtual address space is used for kernel space. Kernel space usually maps to shared physical pages. The remaining space is not used for any purpose.

Figure 6-4 shows the use of process virtual address space.

Figure 6-4:  Virtual Address Space Usage

6.1.3.2    Virtual Address to Physical Address Translation

When a virtual page is touched (accessed), the virtual memory subsystem must locate the physical page and then translate the virtual address into a physical address. Each process has a page table, which is an array containing an entry for each current virtual-to-physical address translation. Page table entries have a direct relation to virtual pages (that is, virtual page 1 corresponds to page table entry 1) and contain a pointer to the physical page and protection information.
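
As a worked illustration (the address is hypothetical), the 8-KB page size means that the low 13 bits of a virtual address are the byte offset within a page (2^13 = 8192), and the remaining high bits are the virtual page number, which indexes the page table:

    virtual address:      0x2000A7C
    virtual page number:  0x2000A7C >> 13 = 0x1000   (page table entry 4096)
    byte offset:          0x2000A7C & 0x1FFF = 0xA7C

The physical address is formed by substituting the physical page number from the page table entry for the virtual page number and keeping the byte offset unchanged.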

Figure 6-5 shows the translation of a virtual address into a physical address.

Figure 6-5:  Virtual-to-Physical Address Translation

A process resident set is the complete set of all the virtual addresses that have been mapped to physical addresses (that is, all the pages that have been accessed during process execution). Resident set pages may be shared among multiple processes.

A process working set is the set of virtual addresses that are currently mapped to physical addresses. The working set is a subset of the resident set, and it represents a snapshot of the process resident set at one point in time.

6.1.3.3    Page Faults

When an anonymous (nonfile-backed) virtual address is requested, the virtual memory subsystem must locate the physical page and make it available to the process. This occurs at different speeds, depending on whether the page is in memory or on disk (see Figure 1-1).

If a requested address is currently being used (that is, the address is in the active page list), it will have an entry in the page table. In this case, the PAL code loads the physical address into the translation lookaside buffer, which then passes the address to the CPU. Because this is a memory operation, it occurs quickly.

If a requested address is not active in the page table, the PAL lookup code issues a page fault, which instructs the virtual memory subsystem to locate the page and make the virtual-to-physical address translation in the page table.

There are four different types of page faults:

The virtual memory subsystem uses the following techniques to improve process execution time and decrease the number of page faults:

6.1.4    Page Reclamation

Because memory resources are limited, the virtual memory subsystem must periodically reclaim pages. The free page list contains clean pages that are available to processes and the UBC. As the demand for memory increases, the list may become depleted. If the number of pages falls below a tunable limit, the virtual memory subsystem will replenish the free list by reclaiming the least-recently used pages from processes and the UBC.

To reclaim pages, the virtual memory subsystem:

  1. Prewrites modified pages to swap space in an attempt to forestall a memory shortage. See Section 6.1.4.1 for more information.

  2. Begins paging if the demand for memory is not satisfied, as follows:

    1. Reclaims pages that the UBC has borrowed and puts them on the free list.

    2. Reclaims the oldest inactive and UBC LRU pages from the active page list, moves the contents of the modified pages to swap space or disk, and puts the clean pages on the free list.

    3. If needed, more aggressively reclaims pages from the active list.

    See Section 6.1.4.2 for more information about reclaiming memory by paging.

  3. Begins swapping if the demand for memory is not met. The virtual memory subsystem temporarily suspends processes and moves entire resident sets to swap space, which frees large numbers of pages. See Section 6.1.4.3 for information about swapping.

The point at which paging and swapping start and stop depends on the values of some vm subsystem attributes. Figure 6-6 shows some of the attributes that control paging and swapping.

Figure 6-6:  Paging and Swapping Attributes

Detailed descriptions of the attributes are as follows:

See Section 6.5 for information about modifying paging and swapping attributes.

The following sections describe the page reclamation procedure in detail.

6.1.4.1    Modified Page Prewriting

The virtual memory subsystem attempts to prevent memory shortages by prewriting modified inactive and UBC LRU pages to disk. To reclaim a page that has been prewritten, the virtual memory subsystem only needs to validate the page, which can improve performance. See Section 6.1.1 for information about page lists.

When the virtual memory subsystem anticipates that the pages on the free list will soon be depleted, it prewrites to disk the oldest modified (dirty) pages that are currently being used by processes or the UBC.

The value of the vm subsystem attribute vm_page_prewrite_target determines the number of inactive pages that the subsystem will prewrite and keep clean. The default value is vm_page_free_target * 2.

The vm_ubcdirtypercent attribute specifies the modified UBC LRU page threshold. When the number of modified UBC LRU pages is more than this value, the virtual memory subsystem prewrites to disk the oldest modified UBC LRU pages. The default value of the vm_ubcdirtypercent attribute is 10 percent of the total UBC LRU pages.

In addition, the sync function periodically flushes (writes to disk) system metadata and data from all unwritten memory buffers. For example, the data that is flushed includes, for UFS, modified inodes and delayed block I/O. Commands, such as the shutdown command, also issue their own sync functions. To minimize the impact of I/O spikes caused by the sync function, the value of the vm subsystem attribute ubc_maxdirtywrites specifies the maximum number of disk writes that the kernel can perform each second. The default value is five I/O operations per second.
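
For example, to check the current prewriting thresholds on your system, you can query the vm subsystem (the values displayed depend on your configuration):

# /sbin/sysconfig -q vm vm_page_prewrite_target vm_ubcdirtypercent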

6.1.4.2    Reclaiming Memory by Paging

When the memory demand is high and the number of pages on the free page list falls below the value of the vm subsystem attribute vm_page_free_target, the virtual memory subsystem uses paging to replenish the free page list. The page-out daemon and task swapper daemon are extensions of the page reclamation code, which controls paging and swapping.

The paging process is as follows:

  1. The page reclamation code activates the page-stealer daemon, which first reclaims the clean pages that the UBC has borrowed from the virtual memory subsystem, until the size of the UBC reaches the borrowing threshold that is specified by the value of the ubc_borrowpercent attribute (the default is 20 percent). Freeing borrowed UBC pages is a fast way to reclaim pages, because UBC pages are usually not modified. If the reclaimed pages are dirty (modified), their contents must be written to disk before the pages can be moved to the free page list.

  2. If freeing clean UBC borrowed memory does not sufficiently replenish the free list, a page out occurs. The page-stealer daemon reclaims the oldest inactive and UBC LRU pages from the active page list, moves the contents of the modified pages to disk, and puts the clean pages on the free list.

  3. Paging becomes increasingly aggressive if the number of free pages continues to decrease. If the number of pages on the free page list falls below the value of the vm subsystem attribute vm_page_free_min (the default is 20 pages), a page must be reclaimed for each page taken from the list.

Figure 6-7 shows the movement of pages during paging operations.

Figure 6-7:  Paging Operation

Paging stops when the number of pages on the free list increases to the limit specified by the vm subsystem attribute vm_page_free_target. However, if paging individual pages does not sufficiently replenish the free list, swapping is used to free a large amount of memory (see Section 6.1.4.3).

6.1.4.3    Reclaiming Memory by Swapping

If there is a continuously high demand for memory, the virtual memory subsystem may be unable to replenish the free page list by reclaiming single pages. To dramatically increase the number of clean pages, the virtual memory subsystem uses swapping to suspend processes, which reduces the demand for physical memory.

The task swapper swaps out a process by suspending the process, writing its resident set to swap space, and moving the clean pages to the free page list. Swapping has a serious impact on system performance because a swapped out process cannot execute, and it should be avoided on very large memory (VLM) systems and systems running large programs.

The point at which swapping begins and ends is controlled by a number of vm subsystem attributes, as follows:

You may be able to improve system performance by modifying the attributes that control when swapping begins and ends, as described in Section 6.5. Large-memory systems or systems running large programs should avoid paging and swapping, if possible.

Increasing the rate of swapping (swapping earlier during page reclamation) may increase throughput: as more processes are swapped out, fewer processes compete for memory, so the processes that remain can accomplish more work. Although increasing the rate of swapping moves long-sleeping threads out of memory and frees memory, it may degrade interactive response time, because an outswapped process incurs a long latency before it can run again when it is needed.

Decreasing the rate of swapping (by swapping later during page reclamation) may improve interactive response time, but at the cost of throughput. See Section 6.5.2 for more information about changing the rate of swapping.

To facilitate the movement of data between memory and disk, the virtual memory subsystem uses synchronous and asynchronous swap buffers. The virtual memory subsystem uses these two types of buffers to immediately satisfy a page-in request without having to wait for the completion of a page-out request, which is a relatively slow process.

Synchronous swap buffers are used for page-in page faults and for swap outs. Asynchronous swap buffers are used for asynchronous page outs and for prewriting modified pages. See Section 6.5.7 for swap buffer tuning information.

6.2    Configuring Swap Space for High Performance

Use the swapon command to display swap space, and to configure additional swap space after system installation. To make this additional swap space permanent, use the vm subsystem attribute swapdevice to specify swap devices in the /etc/sysconfigtab file. For example:

vm:
     swapdevice=/dev/disk/dsk0b,/dev/disk/dsk0d
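
For example, to add a swap device for the current session without editing the /etc/sysconfigtab file (the device name is illustrative):

# /usr/sbin/swapon /dev/disk/dsk0d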

See Section 3.6 for information about modifying kernel subsystem attributes.

See Section 2.3.2.2 and Section 2.3.2.3 for information about swap space allocation modes and swap space requirements.

The following list describes how to configure swap space for high performance:

See the System Administration manual for more information about adding swap devices. See Chapter 8 for more information about configuring and tuning disks for high performance and availability.

6.3    Displaying Memory Information

Table 6-2 describes the tools that you can use to display memory usage information.

Table 6-2:  Tools to Display Virtual Memory and UBC


sys_check

Analyzes system configuration and displays statistics (Section 4.3)

Creates an HTML file that describes the system configuration, and can be used to diagnose problems. The sys_check utility checks kernel variable settings and memory and CPU resources, and provides performance data and lock statistics for SMP systems and kernel profiles.

The sys_check utility calls various commands and utilities to perform a basic analysis of your configuration and kernel variable settings, and provides warnings and tuning guidelines if necessary. See sys_check(8) for more information.

uerf

Displays total system memory

Use the uerf -r 300 command to determine the amount of memory on your system. The beginning of the listing shows the total amount of physical memory (including wired memory) and the amount of available memory. See uerf(8) for more information.

vmstat

Displays virtual memory and CPU usage statistics (Section 6.3.1)

Displays information about process threads, virtual memory usage (page lists, page faults, page ins, and page outs), interrupts, and CPU usage (percentages of user, system and idle times). First reported are the statistics since boot time; subsequent reports are the statistics since a specified interval of time.

ps

Displays CPU and virtual memory usage by processes (Section 6.3.2)

Displays current statistics for running processes, including CPU usage, the processor and processor set, and the scheduling priority.

The ps command also displays virtual memory statistics for a process, including the number of page faults, page reclamations, and page ins; the percentage of real memory (resident set) usage; the resident set size; and the virtual address size.

ipcs

Displays IPC statistics

Displays interprocess communication (IPC) statistics for currently active message queues, shared-memory segments, semaphores, remote queues, and local queue headers.

The information provided in the following fields reported by the ipcs -a command can be especially useful: QNUM, CBYTES, QBYTES, SEGSZ, and NSEMS. See ipcs(1) for more information.

swapon

Displays information about swap space utilization (Section 6.3.3)

Displays the total amount of allocated swap space, swap space in use, and free swap space for each swap device. You can also use the swapon command to allocate additional swap space.

dbx

Reports UBC statistics (Section 6.3.4)

You can check the UBC by using the dbx print command to examine the ufs_getapage_stats data structure, which contains information about UBC page usage.

The following sections describe some of these tools in detail.

6.3.1    Displaying Memory by Using the vmstat Command

To display the virtual memory, process, and CPU statistics, enter the vmstat command, optionally followed by a reporting interval in seconds. For example, to display statistics at five-second intervals, enter:

# /usr/ucb/vmstat 5

Information similar to the following is displayed:

Virtual Memory Statistics: (pagesize = 8192)
procs        memory            pages                       intr        cpu
r  w  u  act  free wire  fault cow zero react pin pout   in  sy  cs  us sy  id
2 66 25  6417 3497 1570  155K  38K  50K    0  46K    0    4 290 165   0  2  98
4 65 24  6421 3493 1570   120    9   81    0    8    0  585 865 335  37 16  48
2 66 25  6421 3493 1570    69    0   69    0    0    0  570 968 368   8 22  69
4 65 24  6421 3493 1570    69    0   69    0    0    0  554 768 370   2 14  84
4 65 24  6421 3493 1570    69    0   69    0    0    0  865  1K 404   4 20  76
  [1]           [2]            [3]                        [4]         [5]
 
 

The first line of the vmstat output is for all time since a reboot, and each subsequent report is for the last interval.

The vmstat command includes information that you can use to diagnose CPU and virtual memory problems. Examine the following fields:

  1. Process information (procs):


  2. Virtual memory information (memory):

    See Section 6.1.1 for more information on page lists.

  3. Paging information (pages):


  4. Interrupt information (intr):


  5. CPU usage information (cpu):

    See Section 7.1.2 for information about using the vmstat command to monitor CPU usage.

To use the vmstat command to diagnose a memory performance problem:

Excessive paging also can increase the miss rate for the secondary cache, and may be indicated by the following output:

To display statistics about physical memory use, enter:

# vmstat -P

Information similar to the following is displayed:

Total Physical Memory =   512.00 M
                      =    65536 pages
Physical Memory Clusters:
 
 start_pfn     end_pfn        type  size_pages / size_bytes
         0         256         pal         256 /    2.00M
       256       65527          os       65271 /  509.93M
     65527       65536         pal           9 /   72.00k
 
Physical Memory Use:
 
 start_pfn     end_pfn        type  size_pages / size_bytes
       256         280   unixtable          24 /  192.00k
       280         287    scavenge           7 /   56.00k
       287         918        text         631 /    4.93M
       918        1046        data         128 /    1.00M
      1046        1209         bss         163 /    1.27M
      1210        1384      kdebug         174 /    1.36M
      1384        1390     cfgmgmt           6 /   48.00k
      1390        1392       locks           2 /   16.00k
      1392        1949   unixtable         557 /    4.35M
      1949        1962        pmap          13 /  104.00k
      1962        2972    vmtables        1010 /    7.89M
      2972       65527     managed       62555 /  488.71M
                             ============================
         Total Physical Memory Use:      65270 /  509.92M
 
Managed Pages Break Down:
 
       free pages = 1207
     active pages = 25817
   inactive pages = 20103
      wired pages = 15434
        ubc pages = 15992
        ==================
            Total = 78553
 
WIRED Pages Break Down:
 
   vm wired pages = 1448
  ubc wired pages = 4550
  meta data pages = 1958
     malloc pages = 5469
     contig pages = 159
    user ptepages = 1774
  kernel ptepages = 67
    free ptepages = 9
        ==================
            Total = 15434

See Section 6.4 for information about increasing memory resources.

6.3.2    Displaying Memory by Using the ps Command

To display the current state of the system processes and how they use memory, enter:

# /usr/ucb/ps aux

Information similar to the following is displayed:

USER  PID  %CPU %MEM   VSZ   RSS  TTY S    STARTED      TIME  COMMAND
chen  2225  5.0  0.3  1.35M  256K p9  U    13:24:58  0:00.36  cp /vmunix /tmp
root  2236  3.0  0.5  1.59M  456K p9  R  + 13:33:21  0:00.08  ps aux
sorn  2226  1.0  0.6  2.75M  552K p9  S  + 13:25:01  0:00.05  vi met.ps
root   347  1.0  4.0  9.58M  3.72 ??  S      Nov 07 01:26:44  /usr/bin/X11/X -a
root  1905  1.0  1.1  6.10M  1.01 ??  R    16:55:16  0:24.79  /usr/bin/X11/dxpa
mat   2228  0.0  0.5  1.82M  504K p5  S  + 13:25:03  0:00.02  more
mat   2202  0.0  0.5  2.03M  456K p5  S    13:14:14  0:00.23  -csh (csh)
root     0  0.0 12.7   356M  11.9 ??  R <  Nov 07 3-17:26:13  [kernel idle]
             [1] [2]   [3] [4]      [5]               [6]     [7]
 
 

The ps command displays a snapshot of system processes in order of decreasing CPU usage, including the execution of the ps command itself. By the time the ps command executes, the state of system processes has probably changed.

The ps command output includes the following information that you can use to diagnose CPU and virtual memory problems:

  1. Percentage of CPU time usage (%CPU).

  2. Percentage of real memory usage (%MEM).

  3. Process virtual address size (VSZ)--This is the total amount of anonymous memory allocated to the process (in bytes).

  4. Real memory (resident set) size of the process (RSS)--This is the total amount of physical memory (in bytes) mapped to virtual pages (that is, the total amount of memory that the application has physically used). Shared memory is included in the resident set size figures; as a result, the total of these figures may exceed the total amount of physical memory available on the system.

  5. Process status or state (S) -- This specifies whether a process is in one of the following states:


  6. Current CPU time used (TIME), in the format hh:mm:ss.ms.

  7. The command that is running (COMMAND).

From the output of the ps command, you can determine which processes are consuming most of your system's CPU time and memory resources, and whether processes are swapped out. Concentrate on processes that are running or paging. Here are some concerns to keep in mind:

6.3.3    Displaying Swap Space Usage by Using the swapon Command

To display information about your swap device configuration, including the total amount of allocated swap space, the amount of swap space that is being used, and the amount of free swap space, enter:

# /usr/sbin/swapon -s

Information similar to the following is displayed for each swap partition:

Swap partition /dev/disk/dsk1b (default swap):     
    Allocated space:        16384 pages (128MB)     
    In-use space:           10452 pages ( 63%)     
    Free space:              5932 pages ( 36%)  
 
Swap partition /dev/disk/dsk4c:
    Allocated space:        128178 pages (1001MB)     
    In-use space:            10242 pages (  7%)     
    Free space:             117936 pages ( 92%)   
 
Total swap allocation:     
 
    Allocated space:        144562 pages (1.10GB)     
    Reserved space:          34253 pages ( 23%)     
    In-use space:            20694 pages ( 14%)     
    Available space:        110309 pages ( 76%)

You can configure swap space when you first install the operating system, or you can add swap space at a later date. Application messages, such as the following, usually indicate that not enough swap space is configured into the system or that a process limit has been reached:

"unable to obtain requested swap space"
"swap space below 10 percent free"

See Section 2.3.2.3 for information about swap space requirements. See Section 6.2 for information about adding swap space and distributing swap space for high performance.

6.3.4    Displaying the UBC by Using the dbx Debugger

If you have not disabled read-ahead, you can display the UBC by using the dbx print command to examine the ufs_getapage_stats data structure. For example:

# /usr/ucb/dbx -k /vmunix /dev/mem
(dbx) print ufs_getapage_stats

Information similar to the following is displayed:

struct {
    read_looks = 2059022
    read_hits = 2022488
    read_miss = 36506
    alloc_error = 0
    alloc_in_cache = 0
}
(dbx)

To calculate the hit rate, divide the value of the read_hits field by the value of the read_looks field. A good hit rate is a rate above 95 percent. In the previous example, the hit rate is approximately 98 percent.

6.4    Tuning to Provide More Memory to Processes

If your system is paging or swapping, you may be able to increase the memory that is available to processes by tuning various kernel subsystem attributes.

Table 6-3 shows the guidelines for increasing memory resources to processes and lists the performance benefits as well as tradeoffs. Some of the guidelines for increasing the memory available to processes may affect UBC operation and file system caching. Adding physical memory to your system is the best way to stop paging or swapping.

Table 6-3:  Memory Resource Tuning Guidelines

Performance Benefit: Improve system response time when memory is low
Guideline: Decrease cache sizes (Section 9.1)
Tradeoff: May degrade file system performance

Performance Benefit: Decrease CPU load and demand for memory
Guideline: Reduce the number of processes running at the same time (Section 6.4.1)
Tradeoff: System performs less work

Performance Benefit: Free memory
Guideline: Reduce the static size of the kernel (Section 6.4.2)
Tradeoff: Not all functionality may be available

Performance Benefit: Free memory
Guideline: Reduce process memory requirements (Section 11.2.6)
Tradeoff: Program may not run optimally

Performance Benefit: Improve network throughput under a heavy load
Guideline: Increase the percentage of memory reserved for kernel malloc allocations (Section 6.4.3)
Tradeoff: Consumes memory

6.4.1    Reducing the Number of Processes Running Simultaneously

You can improve performance and reduce the demand for memory by running fewer applications simultaneously. Use the at or the batch command to run applications at off-peak hours.

See at(1) for more information.
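
For example, to queue a job script so that it runs at 2:00 a.m. (the script path is hypothetical):

# echo "/usr/users/chen/nightly_report" | at 0200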

6.4.2    Reducing the Static Size of the Kernel

You can reduce the static size of the kernel by deconfiguring any unnecessary subsystems. Use the sysconfig command to display the configured subsystems and to delete subsystems. Be sure not to remove any subsystems or functionality that is vital to your environment.
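
For example, the following commands list the configured subsystems and then delete one of them (the lat subsystem is shown only as an illustration; delete only subsystems that you are certain your environment does not need):

# /sbin/sysconfig -s
# /sbin/sysconfig -u lat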

See Section 3.6 for information about modifying kernel subsystem attributes.

6.4.3    Increasing the Memory Reserved for Kernel malloc Allocations

If you are running a large Internet application, you may need to increase the amount of memory reserved for the kernel malloc subsystem. This improves network throughput by reducing the number of packets that are dropped while the system is under a heavy network load. However, increasing this value consumes memory.

Related Attribute

The generic subsystem attribute kmemreserve_percent specifies the percentage of physical memory reserved for kernel memory allocations that are less than or equal to the page size (8 KB).

Value: 1 to 75

Default: 0, which actually specifies 0.4 percent of available memory or 256 KB, whichever is smaller.

You can modify the kmemreserve_percent attribute without rebooting.
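
For example, to reserve 5 percent of physical memory for kernel memory allocations at run time (the value is illustrative and must be in the range 1 to 75):

# /sbin/sysconfig -r generic kmemreserve_percent=5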

When to Tune

You might want to increase the value of the kmemreserve_percent attribute if the output of the netstat -d -i command shows dropped packets, or if the output of the vmstat -M command shows dropped packets under the fail_nowait heading. This may occur under a heavy network load.

See Section 3.6 for information about modifying kernel subsystem attributes.

6.5    Modifying Paging and Swapping Operation

You might improve performance by modifying paging and swapping operations that are described in the following sections:

6.5.1    Increasing the Paging Threshold

Paging is the transfer of program segments (pages) into and out of memory. Excessive paging is not desired. You can specify the number of pages on the free list below which paging begins. See Section 6.1.4 for more information on paging.

Related Attribute

The vm subsystem attribute vm_page_free_target specifies the number of pages on the free list below which paging begins.

The default value of the vm_page_free_target attribute is based on the amount of memory in the system. Use the following table to determine the default value for your system:

Size of Memory          Value of vm_page_free_target
Up to 512 MB            128
513 MB to 1024 MB       256
1025 MB to 2048 MB      512
2049 MB to 4096 MB      768
More than 4096 MB       1024

You can modify the vm_page_free_target attribute without rebooting the system.

When to Tune

Do not decrease the value of the vm_page_free_target attribute.

Do not increase the value of the vm_page_free_target attribute if the system is not paging. You might want to increase the value of the vm_page_free_target attribute if you have sufficient memory resources and your system experiences performance problems when a severe memory shortage occurs. However, increasing the value can increase paging activity on a low-memory system and can waste memory if the value is set too high. See Section 6.1.4 for information about paging and swapping attributes.

If you increase the default value of the vm_page_free_target attribute, you may also want to increase the value of the vm_page_free_min attribute.
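
For example, on a system with 512 MB of memory (default value 128), you might double the paging threshold at run time and then make the change persistent by adding a stanza to the /etc/sysconfigtab file (the value is illustrative):

# /sbin/sysconfig -r vm vm_page_free_target=256

vm:
     vm_page_free_target=256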

See Section 3.6 for information about modifying kernel subsystem attributes.

6.5.2    Managing the Rate of Swapping

Swapping begins when the free page list falls below the swapping threshold. Excessive swapping is not desired. See Section 6.1.4 for more information on swapping. You can specify when swapping begins and ends.

Related Attributes

The following list describes the vm subsystem attributes that control when swapping begins and ends:

You can modify the vm_page_free_optimal, vm_page_free_min, and vm_page_free_target attributes without rebooting the system. See Section 3.6 for information about modifying kernel subsystem attributes.

When to Tune

Do not change the value of the vm_page_free_optimal attribute if the system is not paging.

Decreasing the value of the vm_page_free_optimal attribute improves interactive response time, but decreases throughput.

Increasing the value of the vm_page_free_optimal attribute moves long-sleeping threads out of memory, frees memory, and increases throughput. As more processes are swapped out, fewer processes are actually executing and more work is done. However, when an outswapped process is needed, it will have a long latency and might degrade interactive response time.

Increase the value of the vm_page_free_optimal attribute only by two pages at a time. Do not specify a value that is more than the value of the vm subsystem attribute vm_page_free_target.
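
For example, to query the current value and then increase it by two pages (the values shown are illustrative):

# /sbin/sysconfig -q vm vm_page_free_optimal
vm:
vm_page_free_optimal = 82
# /sbin/sysconfig -r vm vm_page_free_optimal=84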

6.5.3    Enabling Aggressive Task Swapping

Swapping begins when the free page list falls below the swapping threshold, as specified by the vm subsystem vm_page_free_swap attribute. Excessive swapping is not desired. See Section 6.1.4 for more information on swapping. You can specify whether or not idle tasks are aggressively swapped out.

Related Attribute

The vm subsystem vm_aggressive_swap attribute specifies whether or not the task swapper aggressively swaps out idle tasks.

Value: 1 or 0

Default value: 0 (disabled)

When to Tune

Aggressive task swapping improves system throughput, but it degrades interactive response time. Usually, you do not need to enable aggressive task swapping.

You can modify the vm_aggressive_swap attribute without rebooting. See Section 3.6 for information about modifying kernel attributes.
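
For example, to enable aggressive task swapping at run time:

# /sbin/sysconfig -r vm vm_aggressive_swap=1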

6.5.4    Limiting the Resident Set Size to Avoid Swapping

By default, Tru64 UNIX does not limit the resident set size for a process. Applications can set a process-specific limit on the number of pages resident in memory by specifying the RLIMIT_RSS resource value in a setrlimit() call. However, applications are not required to limit the resident set size of a process and there is no system-wide default limit. Therefore, the resident set size for a process is limited only by system memory restrictions. If the demand for memory exceeds the number of free pages, processes with large resident set sizes are likely candidates for swapping. See Section 6.1.4 for more information on swapping.

To avoid swapping a process because it has a large resident set size, you can specify process-specific and system-wide limits for resident set sizes.

Related Attributes

The following list describes the vm subsystem attributes that relate to limiting the resident set size:

When to Tune

You do not need to limit resident set sizes if the system is not paging.

If you limit the resident set size, either for a specific process or system wide, you must also use the vm subsystem attribute anon_rss_enforce to set either a soft or hard limit on the size of a resident set.

If you enable a hard limit, a task's resident set cannot exceed the limit. If a task reaches the hard limit, pages of the task's anonymous memory are moved to swap space to keep the resident set size within the limit.

If you enable a soft limit, anonymous memory paging will start when the following conditions are met:

You cannot modify the anon_rss_enforce attribute without rebooting the system. You can modify the vm_page_free_optimal, vm_rss_maxpercent, vm_rss_block_target, and vm_rss_wakeup_target attributes without rebooting the system.
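
Because changing the anon_rss_enforce attribute requires a reboot, set it in the /etc/sysconfigtab file. The following stanza is a sketch; the values are placeholders, not recommendations, and you should check the attribute descriptions for the value that selects a soft or a hard limit:

vm:
     anon_rss_enforce=1
     vm_rss_maxpercent=70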

6.5.5    Managing Modified Page Prewriting

The vm subsystem attempts to prevent a memory shortage by prewriting modified (dirty) pages to disk. To reclaim a page that was prewritten, the virtual memory subsystem only needs to validate the page, which can improve performance. When the virtual memory subsystem anticipates that the pages on the free list will soon be depleted, it prewrites to disk the oldest inactive and UBC LRU pages. See Section 6.1.4.1 for more information about prewriting. You can tune attributes that relate to prewriting.

Related Attributes

The following list describes the vm subsystem attributes that relate to modified page prewriting:

You can modify the vm_page_prewrite_target or vm_ubcdirtypercent attribute without rebooting the system.

When to Tune

You do not need to modify the value of the vm_page_prewrite_target attribute if the system is not paging.

Decreasing the value of the vm_page_prewrite_target attribute will improve peak workload performance, but it will cause a drastic performance degradation when memory is exhausted.

Increasing the value of the vm_page_prewrite_target attribute will:

Increase the value of the vm_page_prewrite_target attribute by increments of 64 pages.

To increase the rate of UBC LRU dirty page prewriting, decrease the value of the vm_ubcdirtypercent attribute by increments of 1 percent.
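
For example, assuming a current vm_page_prewrite_target value of 512 (illustrative), the following commands apply one increment step to each attribute (the vm_ubcdirtypercent default is 10):

# /sbin/sysconfig -r vm vm_page_prewrite_target=576
# /sbin/sysconfig -r vm vm_ubcdirtypercent=9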

See Section 3.6 for information about modifying kernel attributes.

6.5.6    Managing Page-In and Page-Out Cluster Sizes

The virtual memory subsystem reads in and writes out additional pages to the swap device in an attempt to anticipate pages that it will need. You can specify the number of additional pages that are read from and written to the swap device.

Related Attributes

The following list describes the vm subsystem attributes that relate to reading and writing pages:

You cannot modify the vm_max_rdpgio_kluster and vm_max_wrpgio_kluster attributes without rebooting the system. See Section 3.6 for information about modifying kernel subsystem attributes.

When to Tune

You might want to increase the value of the vm_max_rdpgio_kluster attribute if you have a large-memory system and you are swapping processes. Increasing the value improves peak workload performance because more pages will be in memory and the system will spend less time page faulting, but it consumes more memory and can decrease the total system workload performance.

You may want to increase the value of the vm_max_wrpgio_kluster attribute if you are paging and swapping processes. Increasing the value improves the peak workload performance and conserves memory, but might cause more page ins and decrease the total system workload performance.

6.5.7    Managing I/O Requests on the Swap Partition

Swapping begins when the free page list falls below the swapping threshold. Excessive swapping is not desired. See Section 6.1.4 for more information on swapping. You can specify the number of outstanding synchronous and asynchronous I/O requests that can be on swap partitions at one time.

Synchronous swap requests are used for page-in operations and task swapping. Asynchronous swap requests are used for page-out operations and for prewriting modified pages.

Related Attributes

The following list describes the vm subsystem attributes that relate to I/O requests on swap partitions:

When to Tune

The value of the vm_syncswapbuffers attribute should be equal to the approximate number of simultaneously running processes that the system can easily support. Increasing the value increases overall system throughput, but it consumes memory.

The value of the vm_asyncswapbuffers attribute should be equal to the approximate number of I/O transfers that a swap device can support at one time. If you are using LSM, you might want to increase the value of the vm_asyncswapbuffers attribute, which causes page-in requests to lag asynchronous page-out requests. Decreasing the value will use more memory, but it will improve the interactive response time.

You can modify the vm_syncswapbuffers attribute and the vm_asyncswapbuffers attribute without rebooting the system. See Section 3.6 for information about modifying kernel subsystem attributes.

6.6    Reserving Physical Memory for Shared Memory

Granularity hints allow you to reserve a portion of dynamically wired physical memory at boot time for shared memory. This functionality allows the translation lookaside buffer to map more than a single page, and enables shared page table entry functionality, which may result in more cache hits.

On some database servers, using granularity hints provides a 2 to 4 percent run-time performance gain that reduces the shared memory detach time. See your database application documentation to determine if you should use granularity hints.

For most applications, use the Segmented Shared Memory (SSM) functionality (the default) instead of granularity hints.

To enable granularity hints, you must specify a value for the vm subsystem attribute gh_chunks. In addition, to make granularity hints more effective, modify applications to ensure that both the shared memory segment starting address and size are aligned on an 8-MB boundary.

Section 6.6.1 and Section 6.6.2 describe how to enable granularity hints.

6.6.1    Tuning the Kernel to Use Granularity Hints

To use granularity hints, you must specify the number of 4-MB chunks of physical memory to reserve for shared memory at boot time. This memory cannot be used for any other purpose and cannot be returned to the system or reclaimed.

To reserve memory for shared memory, specify a nonzero value for the gh_chunks attribute. For example, if you want to reserve 4 GB of memory, specify 1024 for the value of gh_chunks (1024 * 4 MB = 4 GB). If you specify a value of 512, you will reserve 2 GB of memory.

The value you specify for the gh_chunks attribute depends on your database application. Do not reserve an excessive amount of memory, because this decreases the memory available to processes and the UBC.

Note

If you enable granularity hints, disable the use of segmented shared memory by setting the value of the ipc subsystem attribute ssm_threshold to zero.
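
For example, to reserve 2 GB of memory for granularity hints (512 chunks * 4 MB = 2 GB) and disable segmented shared memory, you can add stanzas such as the following to the /etc/sysconfigtab file:

vm:
     gh_chunks=512

ipc:
     ssm_threshold=0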

You can determine if you have reserved the appropriate amount of memory. For example, you can initially specify 512 for the value of the gh_chunks attribute. Then, enter the following dbx commands while running the application that allocates shared memory:

# /usr/ucb/dbx -k /vmunix /dev/mem
 
(dbx) px &gh_free_counts
0xfffffc0000681748
(dbx) 0xfffffc0000681748/4X
fffffc0000681748:  0000000000000402 0000000000000004
fffffc0000681758:  0000000000000000 0000000000000002
(dbx)

The previous example shows:

To save memory, you can reduce the value of the gh_chunks attribute until only one or two 512-page chunks are free while the application that uses shared memory is running.

The following vm subsystem attributes also affect granularity hints:

In addition, messages will display on the system console indicating unaligned size and attach address requests. The unaligned attach messages are limited to one per shared memory segment.

See Section 3.6 for information about modifying kernel subsystem attributes.

6.6.2    Modifying Applications to Use Granularity Hints

You can make granularity hints more effective by making both the shared memory segment starting address and size aligned on an 8-MB boundary.

To share third-level page table entries, the shared memory segment attach address (specified by the shmat function) and the shared memory segment size (specified by the shmget function) must be aligned on an 8-MB boundary. This means that the lowest 23 bits of both the address and the size must be zero.

The attach address and the shared memory segment size are specified by the application. In addition, System V shared memory semantics allow a maximum shared memory segment size of 2 GB minus 1 byte. Applications that need shared memory segments larger than 2 GB can construct these regions by using multiple segments. In this case, the total shared memory size specified by the user to the application must be 8-MB aligned. In addition, the value of the shm_max attribute, which specifies the maximum size of a System V shared memory segment, must be 8-MB aligned.

If the total shared memory size specified to the application is greater than 2 GB, you can specify a value of 2139095040 (or 0x7f800000) for the value of the shm_max attribute. This is the maximum value (2 GB minus 8 MB) that you can specify for the shm_max attribute and still share page table entries.
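
To confirm that a candidate shm_max value is 8-MB aligned, verify that it is an exact multiple of 8388608 (8 MB); for example, using shell arithmetic:

# expr 2139095040 % 8388608
0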

Use the following dbx command sequence to determine if page table entries are being shared:

# /usr/ucb/dbx -k /vmunix /dev/mem
 
(dbx) p *(vm_granhint_stats *)&gh_stats_store
	struct {
	    total_mappers = 21
	    shared_mappers = 21
	    unshared_mappers = 0
	    total_unmappers = 21
	    shared_unmappers = 21
	    unshared_unmappers = 0
	    unaligned_mappers = 0
	    access_violations = 0
	    unaligned_size_requests = 0
	    unaligned_attachers = 0
	    wired_bypass = 0
	    wired_returns = 0
	} 
	(dbx)

For the best performance, the shared_mappers kernel variable should be equal to the number of shared memory segments, and the unshared_mappers, unaligned_attachers, and unaligned_size_requests variables should be zero.

Because of how shared memory is divided into shared memory segments, there may be some unshared segments. This occurs when the starting address or the size is not aligned on an 8-MB boundary. This condition may be unavoidable in some cases. In many cases, the value of total_unmappers will be greater than the value of total_mappers.

Shared memory locking uses a hashed array of locks instead of a single lock. You can modify the size of the hashed array of locks by changing the value of the vm subsystem attribute vm_page_lock_count. The default value is zero.