DECdfs for OpenVMS Management Guide




Appendix C
Adjusting DECnet and Client RMS Parameters to Enhance Performance

Compaq designed DECdfs software to provide excellent performance using the default DECnet parameters. For this reason, many DECdfs users do not need to change any DECnet parameters.

In some configurations, however, you can significantly improve performance by adjusting a few DECnet parameters (called tuning). For example, a programming environment in which each individual client user opens many files simultaneously could benefit from tuning. Such an environment uses more network resources than one in which each client user opens only one file at a time. The more network resources your configuration uses, the more likely it is to benefit from tuning. Another DECdfs environment in which tuning can improve performance is a server or client system that supports many DECdfs users. This appendix describes the DECnet parameters you can adjust to tune your DECdfs configuration to suit your needs. For detailed information on DECnet parameters, see the DECnet Phase IV or DECnet Phase V documentation set, depending on the version of DECnet you are using.

C.1 Setting DECnet Network Parameters

To obtain the greatest benefit, adjust parameters that affect many users. Tune the server first, then the clients with the highest numbers of DECdfs users. You can change a DECnet parameter either temporarily or permanently. Changing a parameter temporarily is useful for evaluating the effect of the change; when you are satisfied with the result, you can make the change permanent.

DECnet Phase IV:

Use the Network Control Program (NCP) SET command to modify DECnet network parameters temporarily. The SET command affects the volatile database: parameters changed with SET take effect immediately but are lost when the system shuts down. The DEFINE command affects the permanent database: parameters set with DEFINE do not take effect until the system reboots but persist thereafter unless you change them. For more information about NCP commands, see DECnet for OpenVMS Network Management Utilities.

DECnet Phase V:

To change a parameter so that the new value takes effect immediately, enter the appropriate command at the NCL> prompt. Changes made by this method take effect immediately but are lost when the system shuts down. This method is useful for testing the immediate effect of various parameter settings.

To permanently change a DECnet Phase V parameter, edit the applicable NCL script file. The names of NCL script files have the following format: SYS$MANAGER:NET$entity-module_STARTUP.NCL. Changes entered in the NCL script file do not take effect until the system reboots but are permanent thereafter unless you change them. Use this method when you want to preserve your changes. See the DECnet-Plus for OpenVMS Network Management, DECnet/OSI Network Management, DECnet-Plus Network Control Language Reference, and the DECnet/OSI Network Control Language Reference manuals for more information about setting DECnet Phase V parameters.

The same procedure for setting network parameters applies to DECdfs servers and clients. The following sections describe how to adjust network parameters that affect the performance of DECdfs.

C.1.1 Line Receive Buffers/Station Buffers

Line receive buffers (called station buffers in DECnet Phase V) enable DECdfs to receive information from the network. DECdfs operates efficiently when enough buffers are available to accept incoming data. If the number of buffers available is not sufficient, incoming data is lost and the network must retransmit it, thus degrading performance. DECnet counts the number of times the network attempts to transmit information and finds that a buffer is unavailable. You can display the total as follows:

DECnet Phase IV:


NCP> SHOW LINE line-id COUNTERS

The number of times a buffer was unavailable is shown at the end of the display as User buffer unavailable.

DECnet Phase V:


NCL> SHOW [NODE node-id] CSMA-CD STATION station-name ALL COUNTERS

Replace node-id with the name or address of the node. The number of times a buffer was unavailable is shown at the end of the display as Station buffer unavailable. (To show the name of the station, use the command SHOW CSMA-CD STATION * ALL COUNTERS.)

You can increase the number of buffers, as follows:

DECnet Phase IV:


NCP> SET LINE line-id RECEIVE BUFFERS integer

Replace integer with a value from 1 to 32. The default value is 4.

The following example uses the NCP SET and DEFINE commands to set the number of receive buffers for the line BNA-0 to 26.


NCP> SET LINE bna-0 RECEIVE BUFFERS 26
NCP> DEFINE LINE bna-0 RECEIVE BUFFERS 26

DECnet Phase V:


NCL> SET NODE 0 CSMA-CD STATION station-name STATION BUFFERS integer

Replace integer with a value between 1 and 64. The default is 4.

The following example sets the number of station buffers for station SVA-0 to 23.


NCL> DISABLE NODE 0 CSMA-CD STATION sva-0
NCL> SET NODE 0 CSMA-CD STATION sva-0 STATION BUFFERS 23
NCL> ENABLE NODE 0 CSMA-CD STATION sva-0

To make your change permanent, edit the file SYS$MANAGER:NET$CSMACD_STARTUP.NCL. Edit the line with the following format to specify the number of station buffers:


SET NODE 0 CSMA-CD STATION station-name STATION BUFFERS integer

C.1.2 Pipeline Quota (DECnet Phase IV Only)

The NCP PIPELINE QUOTA parameter specifies the number of bytes of nonpaged pool each DECnet logical link has available for buffering data between DECnet and DECdfs. DECdfs uses a single DECnet logical link between a client and server node. If a node has many concurrent users, this logical link may need more nonpaged pool than the default of 3000 bytes.

If both a client-server and server-client relationship exist between two nodes, one DECnet logical link exists for each of the two relationships. Hence, the pipeline quota you set must support the larger of two numbers representing:

To set the PIPELINE QUOTA parameter, use the following command:


NCP> SET EXECUTOR PIPELINE QUOTA quota

For optimal system performance with moderate to heavy DECdfs workloads, replace quota with 32767. If many DECdfs users on one client access a server, replace quota with its maximum value of 65535.
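
For example, the following commands set the pipeline quota to 32767 in both the volatile and permanent databases, following the NCP SET/DEFINE pattern described in Section C.1:


NCP> SET EXECUTOR PIPELINE QUOTA 32767
NCP> DEFINE EXECUTOR PIPELINE QUOTA 32767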

C.1.3 Maximum Window (DECnet Phase V Only)

The MAXIMUM WINDOW parameter replaces the DECnet PIPELINE QUOTA parameter. MAXIMUM WINDOW is a Network Services Protocol (NSP) and Open Systems Interconnection (OSI) characteristic. It controls the number of data segments allowed to be transmitted over a transport connection before at least one acknowledgment must be returned from the destination system, such as DECdfs. If the number of data segments transmitted equals the MAXIMUM WINDOW value and no acknowledgments have been received, the transport stops sending data segments and waits for an acknowledgment message. For further information on MAXIMUM WINDOW, see the DECnet Phase V documentation set.

To determine the value set for MAXIMUM WINDOW on your system, use the following command:


NCL> SHOW NSP ALL

To set the MAXIMUM WINDOW parameter on an NSP transport, use the following commands:


NCL> DISABLE [NODE node-id] NSP
NCL> SET [NODE node-id] NSP MAXIMUM WINDOW = integer
NCL> ENABLE [NODE node-id] NSP

Replace node-id with the name or address of the node. Replace integer with a value between 1 and 2047. The default value is 32. Compaq recommends a value of 60 for configurations with an average number of users, and values of 120 to 150 for configurations with a large number of users.
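
For example, the following commands set MAXIMUM WINDOW to 60, the value recommended for a configuration with an average number of users. As in the earlier examples, NODE 0 refers to the local node:


NCL> DISABLE NODE 0 NSP
NCL> SET NODE 0 NSP MAXIMUM WINDOW = 60
NCL> ENABLE NODE 0 NSP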

To make your change permanent, edit the file named in the following format: SYS$MANAGER:NET$transport-name_STARTUP.NCL. The transport name can be either NSP or OSI; DECnet Phase IV nodes use NSP, while both NSP and OSI reside on DECnet Phase V nodes. Edit the line with the following format to specify the value for integer.


SET NODE 0 NSP MAXIMUM WINDOW = integer 

C.1.4 Maximum Links/Transport Connections

The NCP MAXIMUM LINKS and NCL MAXIMUM TRANSPORT CONNECTIONS parameters specify how many connections a node can maintain with other nodes.

DECnet Phase IV:

MAXIMUM LINKS determines how many DECdfs connections a server accepts from DECdfs clients. Each communication connection between a client and a server requires a single DECnet logical link (transport connection). The DECdfs Communication Entity creates one connection for all communication between a server and a particular client. This single connection provides DECdfs service to any number of users at the client. The users can mount any number of access points on the server and open any number of files.

To specify how many transport connections your system allows, use the following command:


NCP> SET EXECUTOR MAXIMUM LINKS integer

The maximum value for integer is 960. This value is reduced to 512, however, if the ALIAS MAXIMUM LINKS parameter is also specified. The default value is 32. A workable range for many networks is 25 to 50.

The maximum should be high enough to accommodate both DECdfs and all other network users. You may need to raise this parameter on servers with incoming connections from many different clients and on clients with outgoing connections to many different servers.

The following example sets the MAXIMUM LINKS to 40:


NCP> SET EXECUTOR MAXIMUM LINKS 40
NCP> DEFINE EXECUTOR MAXIMUM LINKS 40

DECnet Phase V:

MAXIMUM TRANSPORT CONNECTIONS determines how many DECdfs connections a server accepts from DECdfs clients. Each communication connection between a client and a server requires a single DECnet logical link (transport connection). To determine the value set for MAXIMUM TRANSPORT CONNECTIONS, use the following command:


NCL> SHOW NSP ALL

To modify the MAXIMUM TRANSPORT CONNECTIONS parameter, disable the transport, set the parameter, and then reenable the transport, using the following commands:


NCL> DISABLE [NODE node-id] NSP
NCL> SET [NODE node-id] NSP MAXIMUM TRANSPORT CONNECTIONS integer
NCL> ENABLE [NODE node-id] NSP

Replace node-id with the name or address of the node. Replace integer with a value between 0 and 65535. The value must be less than the current value of MAXIMUM REMOTE NSAPS. For further information on MAXIMUM REMOTE NSAPS, see the DECnet/OSI Network Control Language Reference manual or the DECnet-Plus Network Control Language Reference manual.

The following example sets the maximum transport connections parameter for an NSP protocol to 1001.


NCL> DISABLE NODE 0 NSP
NCL> SET NODE 0 NSP MAXIMUM TRANSPORT CONNECTIONS 1001
NCL> ENABLE NODE 0 NSP

To make your change permanent, edit the script file SYS$MANAGER:NET$transport-name_STARTUP.NCL. The transport name can be either NSP or OSI. Edit the line with the following format to specify the value for the maximum transport connections parameter:


SET NODE 0 NSP MAXIMUM TRANSPORT CONNECTIONS integer

C.2 Setting Client RMS Default Parameters

If you use the file processing and management functions of VAX Record Management Services (RMS), you may need to adjust the RMS defaults. Note that RMS buffering occurs on the DECdfs client.

Section C.2.1 describes how to set RMS parameters for sequential file access. Section C.2.2 suggests an RMS default for indexed sequential files or relative files that are heavily accessed. For more information about the SET RMS_DEFAULT command, see the OpenVMS DCL Dictionary. For more information about optimizing access to RMS files, see Guide to OpenVMS File Applications.

C.2.1 Sequential File Access

To make the best use of DECdfs's quick file access, most applications benefit from default RMS multibuffer and multiblock values of 3 and 16, respectively, when accessing sequential files.

Set the number of buffers to 3 for the most efficient multibuffering of file operations. Use the following DCL command:


$ SET RMS_DEFAULT/BUFFER_COUNT=3/DISK

Next, set the size of each buffer to sixteen 512-byte blocks:


$ SET RMS_DEFAULT/BLOCK_COUNT=16
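
To confirm the new process defaults, use the DCL command SHOW RMS_DEFAULT, which displays the current multiblock, multibuffer, and related RMS values:


$ SHOW RMS_DEFAULT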

To set these values for just your user process, you can include the commands in your LOGIN.COM file. To set them on a systemwide basis, you can add the /SYSTEM qualifier and include the commands in the DFS$SYSTARTUP file.
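
For example, a DFS$SYSTARTUP entry that sets the values on a systemwide basis might look like the following. (This is a sketch; the /SYSTEM qualifier requires the CMKRNL privilege.)


$ SET RMS_DEFAULT/SYSTEM/BUFFER_COUNT=3/DISK
$ SET RMS_DEFAULT/SYSTEM/BLOCK_COUNT=16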

RMS multibuffer and multiblock values that are larger than the default values can slow performance by allowing the application to exceed the DECnet pipeline quota. However, these values are recommendations that may not be optimal for every application. If your application opens many files or if it has a small working set size, you may find these values are too large.

Note

If you prefer, you can set the RMS default multibuffer value by using the SYSGEN parameter RMS_DFMBF. You can set the RMS default multiblock value by using the SYSGEN parameter RMS_DFMBC.
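
For example, the following SYSGEN session sets both parameters. (This is a sketch; values written to the current system file with WRITE CURRENT take effect at the next reboot.)


$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE CURRENT
SYSGEN> SET RMS_DFMBF 3
SYSGEN> SET RMS_DFMBC 16
SYSGEN> WRITE CURRENT
SYSGEN> EXIT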

C.2.2 Indexed Sequential File or Relative File Access

If you have indexed sequential files or relative files that are heavily accessed, you may set appropriate RMS defaults by using the /INDEXED or /RELATIVE qualifiers to the SET RMS_DEFAULT command.

This manual cannot recommend specific values for /INDEXED or /RELATIVE qualifiers to use with DECdfs because these values depend on file characteristics and file access patterns that can vary widely. For information about determining appropriate values for the /INDEXED or /RELATIVE qualifiers, see the Guide to OpenVMS File Applications.

Do not use the /INDEXED or /RELATIVE qualifier if typical file access patterns from the client involve only a few record operations each time an indexed sequential or relative file is opened.

If several processes share read access to a DECdfs-served file, try using global buffering for that file. For more information about global buffering, see the Guide to OpenVMS File Applications.
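
For example, the following command enables global buffering for a heavily shared file. (The file specification and buffer count here are illustrative only; see the Guide to OpenVMS File Applications for guidance on choosing an appropriate count.)


$ SET FILE/GLOBAL_BUFFERS=100 DISK$DATA:[SHARED]PARTS.IDX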

Note

If you prefer, you can set the RMS default multibuffer count for indexed sequential files by using the SYSGEN parameter RMS_DFIDX. You can set the RMS default multibuffer count for relative files by using the SYSGEN parameter RMS_DFREL.

