DECdfs for OpenVMS Management Guide




Chapter 2
Managing a DECdfs Server

Managing a DECdfs for OpenVMS server involves first preparing the system for use by DECdfs and then using DFS$CONTROL commands to create one or more access points and make them available. If you choose, you can also use DFS$CONTROL commands to tailor the operation of the server and the Communication Entity to enhance performance.

This chapter describes the following management tasks: setting system parameters, setting up proxy accounts, creating and managing access points, protecting server files and individual files, managing the persona cache and the data cache, using a cluster as a DECdfs server, and stopping and starting DECdfs on your system.

Most of these tasks involve the use of DFS$CONTROL commands and qualifiers. For complete information on a command, see Chapter 4.

After you read this chapter, set the necessary system and network parameters and edit the DFS$CONFIG.COM and DFS$SYSTARTUP.COM files. You can then start DECdfs on your system by executing the SYS$STARTUP:DFS$STARTUP.COM file.

2.1 Setting System Parameters

Running DECdfs on an OpenVMS system requires that you adjust certain system generation (SYSGEN) parameters. Before installation, change the CHANNELCNT, NPAGEDYN, GBLPAGES, GLBSECTIONS, and INTSTKPAGES (VAX only) parameters as directed in the DECdfs for OpenVMS Installation Guide. On OpenVMS VAX systems, increasing the INTSTKPAGES parameter is especially important. If the number of interrupt stack pages is not large enough, an interrupt stack overflow can cause your system to halt.

Sections 2.1.1, 2.1.2, and 2.1.3 describe DECdfs Communication Entity and server parameters that interact with each other and with system and network parameters. These parameters limit the number of open files and the amount of DECdfs activity.

The parameters work together in a layered manner; that is, you can set parameters at the system level, the network level, or the application (DECdfs) level. Setting a low value at any one of these levels affects the server's operation, even if you set higher values at the other levels. For example, if you specify that the DECnet network should establish very few logical links to and from your system, the low number of links prevents DECdfs from establishing a large number of connections.

For information about limiting logical links at the network level, see Appendix C.

2.1.1 Limiting the Number of Open Files

Your system's channel count parameter, CHANNELCNT, specifies the maximum number of files that any process on the system can open concurrently. Each file requires one channel, and the DECdfs server process opens all local files that users at DECdfs clients access. If the server is your system's most active file user, you may need to increase the channel count to accommodate the server.

Determine the appropriate CHANNELCNT parameter by estimating the maximum number of simultaneously open files you expect on the server. Add 15 to this number to allow for some additional channels for the server's own use. For example, if you expect 250 files to be open simultaneously, set the CHANNELCNT parameter to 265 channels before running DECdfs. To show the current value for the CHANNELCNT parameter, invoke SYSGEN as follows:


$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE CURRENT
SYSGEN> SHOW CHANNELCNT

SYSGEN displays the settings for CHANNELCNT under the Current heading, as follows:


Parameter Name  Current  Default  Minimum   Maximum  Units  Dynamic
--------------  -------  -------  -------   -------  -----  -------
CHANNELCNT          202      127       31      2047  Channels  

Insert the following line in the MODPARAMS.DAT file in the SYS$SYSTEM directory, and then run the AUTOGEN procedure:


MIN_CHANNELCNT  = 265 

For information on AUTOGEN, see the OpenVMS System Management Utilities Reference Manual. You can read the online help information about the CHANNELCNT parameter by entering the following SYSGEN HELP command:


$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> HELP PARAMETERS SPECIAL_PARAMS CHANNELCNT

2.1.2 Controlling DECdfs Activity

You can control DECdfs activity by limiting the number of outstanding Communication Entity requests; that is, the Communication Entity allows you to specify how many file I/O requests from clients can be outstanding at the server simultaneously. To specify this value, enter the following DFS$CONTROL command:


DFS> SET COMMUNICATION/REQUESTS_OUTSTANDING_MAXIMUM=value

If the number of requests arriving from client systems exceeds the Communication Entity's permitted number of outstanding requests, the Communication Entity stops accepting data from DECnet. The DECnet network layer buffers the requests until the requests reach the value specified by one of these parameters:

DECnet Phase IV: PIPELINE QUOTA parameter

DECnet Phase V: MAXIMUM WINDOW parameter

For more information on these parameters, see Appendix C.

When the limit is reached, DECnet's flow control mechanism stops the client from sending data and returns an error message.

2.1.3 Limiting Inactive DECdfs DECnet Links

The DECdfs Communication Entity monitors the DECnet links, using the time interval specified by the SET COMMUNICATION/SCAN_TIME command. If the Communication Entity finds that a link is inactive on two successive scans, it disconnects the link. The link is reestablished when a user on that client next requests a file operation. The Communication Entity maintains the DECdfs connection even after it times out a link.
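For example, a command of the following form sets the scan interval to 30 minutes. The value shown is only an illustration; see Chapter 4 for the exact delta-time format that the /SCAN_TIME qualifier accepts.


DFS> SET COMMUNICATION/SCAN_TIME=00:30:00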

2.2 Setting Up Proxy Accounts

Client users must have OpenVMS proxy accounts in order to access the server. You use the Authorize Utility (AUTHORIZE) to create these accounts. The Authorize Utility modifies the network user authorization file, NETPROXY.DAT, so that users at DECdfs clients get the necessary rights and privileges at the server. For information on AUTHORIZE commands, see the OpenVMS System Management Utilities Reference Manual.

Each remote user can be granted DECnet proxy access to multiple accounts. One of the accounts can be the default proxy account for that user. The DECdfs server recognizes only default proxy accounts.

The following example shows how you use AUTHORIZE to grant proxy access. This example gives user CHRIS on node EGRET access to the existing local account STAFF on the server.


$ SET DEFAULT SYS$SYSTEM
$ RUN AUTHORIZE
UAF> ADD/PROXY EGRET::CHRIS STAFF /DEFAULT
UAF> EXIT

To give users access to the DECdfs server without giving them explicit proxy accounts, create a default DECdfs account (DFS$DEFAULT).

Example 2-1 shows how to set up a default DECdfs account or proxy account that cannot be used for any purpose except DECdfs access. If your system has a default DECnet account, you can choose the same UIC or the same group code for your DECdfs default account. Using the UIC of the DECnet default account allows the DECdfs default account to access the files and directories on the system that are accessible to the DECnet default account. Otherwise, choose a UIC or group code that is different from all other accounts on the system.

Example 2-1 Creating a DFS$DEFAULT Account

$ SET DEFAULT SYS$SYSTEM
$ RUN AUTHORIZE
UAF> ADD DFS$DEFAULT    -
 /NOACCESS=(PRIMARY, SECONDARY) -
 /ASTLM=0        -
 /BIOLM=0        -
 /BYTLM=0        -
 /CLI=no_such_cli        -
 /CLITABLES=no_such_tbl  -
 /CPUTIME=::.01  -
 /DEFPRIVILEGES=NOALL    -
 /DEVICE=NLA0:   -
 /DIOLM=0        -
 /DIRECTORY=[no_such_directory]  -
 /ENQLM=0        -
 /FILLM=0        -
 /FLAGS=(CAPTIVE, DEFCLI, DISCTLY, DISMAIL, DISNEWMAIL, DISRECONNECT, -
 DISWELCOME, LOCKPWD, PWD_EXPIRED, PWD2_EXPIRED, RESTRICTED) -
 /GENERATE_PASSWORD=BOTH -
 /JTQUOTA=0      -
 /LGICMD=no_such_file    -
 /OWNER="Distributed File Service"      -
 /PGFLQUOTA=0    -
 /PRCLM=0        -
 /PRIORITY=0     -
 /PRIVILEGES=NOALL       -
 /PWDEXPIRED     -
 /PWDLIFETIME=::.01      -
 /PWDMINIMUM=31  -
 /TQELM=0        -
 /UIC=[ggg,mmm]  -
 /WSDEFAULT=0    -
 /WSEXTENT=0     -
 /WSQUOTA=0
UAF> EXIT
$ 

The example illustrates creating a well-protected default DECdfs account that is fully usable by DECdfs. See the OpenVMS Guide to System Security for information on default network accounts. Use care in setting up the account to ensure that DECdfs users have the rights and privileges necessary to access the files they need. If you create a DFS$DEFAULT account, all users without explicit proxy accounts have the rights, privileges, and identity of DFS$DEFAULT.

The DFS$DEFAULT account in Example 2-1 can also serve as a model for an individual proxy account that gives DECdfs users access to the server while preventing other types of access. For detailed information about creating proxy accounts, see the OpenVMS Guide to System Security, the DECnet for OpenVMS Network Management Utilities manual, and the DECnet-Plus for OpenVMS Network Management manual.

2.2.1 Setting Up Privileges

The privileges that affect file-access checking (BYPASS, GRPPRV, READALL, and SYSPRV) also control DECdfs access to files.

If the proxy account or DFS$DEFAULT account has any of these privileges as default privileges, the DECdfs server uses them to allow access to files.

If the proxy account or DFS$DEFAULT account has any of these privileges as authorized privileges, the DECdfs server uses them whenever it detects that the client process has these privileges enabled.

Note

Dynamic enabling and disabling of privileges differs from ordinary DECnet file-access checking, which can use only the default privileges of the proxy or default account.

Allowing SETPRV as an authorized privilege for a DECdfs proxy account or the DFS$DEFAULT account is the same as allowing all privileges as authorized privileges.
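For example, assuming the STAFF proxy account from the earlier example (the account name is illustrative), the following AUTHORIZE commands grant GRPPRV as an authorized privilege while keeping it out of the account's default privileges:


$ SET DEFAULT SYS$SYSTEM
$ RUN AUTHORIZE
UAF> MODIFY STAFF /PRIVILEGES=GRPPRV /DEFPRIVILEGES=NOGRPPRV
UAF> EXIT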

2.2.2 Setting Up UICs, ACLs, and User Names

In some circumstances, the difference between the server environment and the client environment can become obvious to users. This section explains how user identification codes (UICs), access control lists (ACLs), and user names can cause operational discrepancies between the server and client.

2.2.2.1 User Identification Codes

The OpenVMS system on the server interprets a file's user identification code (UIC) according to its rights database (RIGHTSLIST.DAT). The OpenVMS system stores a file owner's UIC as a binary value, which it translates to ASCII according to the rights database when displaying the UIC to a user. When a user at a DECdfs client requests the UIC of a file, the server system passes the binary value to the client system.

If the file UIC and proxy account UIC are the same, DECdfs converts the file UIC to the client account UIC. Otherwise, the client system translates the binary UIC according to its own rights database, and the result might seem incorrect to users at the client.

Users might have difficulty performing some directory or backup operations on files or directories that are not owned by the user's proxy account on the server. You can eliminate these problems by creating proxy account UICs that match the client UICs. If that is not possible, inform the client system manager or users that UIC discrepancies affect the following DCL commands:

Note

Client users can avoid problems with the BACKUP command by using the /BY_OWNER=PARENT or /BY_OWNER=ORIGINAL qualifier as described in Section 3.4.2.

For more information about UICs, see Section 3.4.2.

2.2.2.2 Access Control Lists

The OpenVMS system on the server also interprets a file's access control lists (ACLs) according to its rights database. It propagates default access control entries (ACEs) for DECdfs users' files from the directory in which it creates those files. The OpenVMS system enforces ACEs on files at the server; you can log in to the server and set ACEs that control DECdfs access to files. However, users cannot set or display ACLs from a DECdfs client. For more information on ACLs and ACEs, see Section 2.5.

2.2.2.3 User Names

With applications that require user names, a discrepancy can occur if a user has different user names on the client and the server. If the user sometimes accesses the application from a DECdfs client and, at other times, locally from the server, certain operations of the application can fail.

For example, DEC Code Management System (CMS) reserves and replaces software components according to user name. When a user reserves and removes a component, CMS stores that person's user name in its library data file. When the user attempts to replace the component, CMS compares the current user name with the stored name. If the names do not match, the user cannot replace the component. Suppose the CMS libraries are on a server, and a user reserves a library component when running CMS at a client. If the user later logs in to the server and tries to replace the component, CMS rejects the replacement operation unless the user names match.

2.2.3 Giving Cluster Clients Access to Server Files

If the client node is a cluster system, enable the outgoing cluster alias on the client node (see Section 3.8) and add a proxy on the server from the cluster's user to the local user account. This allows users to access DECdfs files regardless of which cluster member they log in to.

To add this proxy, use the following command format at the Authorize Utility's UAF> prompt:


UAF> ADD/PROXY client-cluster-name::remote-user user-name /DEFAULT

If the client node is a cluster system and the outgoing cluster alias is not enabled, you need to add a proxy on the server from each node in the client cluster to the local user account. This allows users to access DECdfs files regardless of which cluster member they log in to. The following example adds proxies for three nodes residing in a cluster in which the cluster alias is not enabled:


UAF> ADD/PROXY NODE_A::B_WILLIAMS B_WILLIAMS /DEFAULT
UAF> ADD/PROXY NODE_B::B_WILLIAMS B_WILLIAMS /DEFAULT
UAF> ADD/PROXY NODE_C::B_WILLIAMS B_WILLIAMS /DEFAULT

2.2.4 Allowing Client Users to Print Server Files

To allow client users to print files from your server, you must create special proxy accounts. The OpenVMS print symbiont runs under the SYSTEM account. The client SYSTEM account therefore needs proxy access to your server in order to print files for users.

Giving another node's SYSTEM account proxy access to your node is an issue to resolve according to the security needs at your site.

If the client node is a single-user workstation, you could grant its SYSTEM account access to its user's proxy account on the server. To do so, use the following command format at the Authorize Utility's UAF> prompt:


UAF> ADD/PROXY client-node-name::SYSTEM user-name /DEFAULT

For example, if Julie's workstation is EAGLE, you can enable her to print DECdfs files by giving the SYSTEM account on EAGLE access to the JULIE proxy account on your server:


UAF> ADD/PROXY EAGLE::SYSTEM JULIE /DEFAULT

If the client node is a time-sharing system with more than one user, however, granting its SYSTEM account access to a nondefault proxy account can pose security risks for files served by DECdfs. Instead, do the following:

  1. Use the Authorize Utility to create a special proxy account for client printing. You can name this account DFS$PRINT.
  2. Set up the account to resemble the DFS$DEFAULT account shown in Example 2-1, but replace the /DEFPRIVILEGES=NOALL qualifier with /DEFPRIVILEGES=READALL and assign the account its own password.
  3. After creating the DFS$PRINT account, give the client time-sharing node's SYSTEM account proxy access to it.

However, this method might have a security weakness because it lets the system account at the client read any DECdfs-served file on the server.

Another method for allowing client users to print files on the server is to permit the client SYSTEM account to access DFS$DEFAULT. This method is more secure than creating a DFS$PRINT account, but it limits users on the client to the following operations:

Note

If the client is a time-sharing system or a cluster, see Section 3.6 for information about using the /DEVICE qualifier with the DFS$CONTROL command MOUNT.

A method that Compaq does not recommend, but that you may choose to implement under certain circumstances, is to give the client node's SYSTEM account access to the SYSTEM account on your node. You might do so, for example, if you are the system manager of both the client and the server. To choose this option, use the following command at the UAF> prompt:


UAF> ADD/PROXY client-node-name::SYSTEM SYSTEM /DEFAULT

Warning

In a large network, using a wildcard to give multiple SYSTEM accounts (*::SYSTEM) access to any nondefault account on your system can be a serious breach of your system's security. This is especially true of giving such access to your SYSTEM account.

2.3 Creating and Managing Access Points

An access point consists of the file resources that a DECdfs server provides to one or more users of a DECdfs client. See Section 1.1.2 for more information about access points. This section discusses deciding where to place access points, adding access points, determining access point information, changing and removing access points, and maintaining consistency with DECdns.

2.3.1 Deciding Where to Place Access Points

Each time you add an access point on a DECdfs server, you specify a device and directory to which the access point name refers. The DFS$CONTROL command ADD ACCESS_POINT requires a device name and gives you the option of supplying a directory. The default directory is the master file directory for the device ([000000]), but you can place the access point lower in the directory tree. This placement affects the user's perception of the directory structure.

If you place the access point at the device's actual master file directory, end users can access files in the disk's directories as they normally would. Figure 2-1 illustrates this placement, with the access point at the master file directory. The user enters a command that accesses one of the first subdirectories.

Figure 2-1 Access Point at the Master File Directory


If you place the access point at a subdirectory of the master file directory, that subdirectory appears on the client device as a master file directory. To perform file operations in that directory, end users would have to specify the directory as [000000] in their file specifications. Figure 2-2 illustrates this access point placement.

Figure 2-2 Access Point at a Subdirectory


The figure shows that [000000] is the actual master file directory for the disk, as viewed from the server. The user command, however, uses [000000] to represent the master file directory for the client device, which is the server directory at which you placed the access point.
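As a sketch only (the access point, device, directory, and logical names are illustrative), placing an access point at a subdirectory might look like this on the server:


DFS> ADD ACCESS_POINT DEPT_DOCS DUA1:[DOCS.PUBLIC]

A user at a client could then mount the access point and treat that subdirectory as the master file directory of the client device:


DFS> MOUNT DEPT_DOCS DOCS_LIBRARY
$ DIRECTORY DOCS_LIBRARY:[000000]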

The user at a DECdfs client can create subdirectories to the usual OpenVMS depth limit of 8, starting with the master file directory of the client device. If the master file directory on the client device is a subdirectory at the server, the user can create subdirectories that are hidden from OpenVMS at the server. These DECdfs subdirectories can nest as many as eight additional directories at the server. Backing up the server disk includes these DECdfs subdirectories only if you use the /IMAGE or /PHYSICAL qualifier to the BACKUP command. This is similar to what happens when you create rooted-device logical names in OpenVMS (see the Guide to OpenVMS File Applications).

2.3.2 Adding Access Points

To add an access point, you use the DFS$CONTROL command ADD ACCESS_POINT on the DECdfs server that contains the resource you want to make available. To make the access point available, you enter the DFS$CONTROL command MOUNT on a DECdfs client. Refer to Chapter 4 for detailed information on all DFS$CONTROL commands.

The ADD ACCESS_POINT command requires that you specify a device and optionally allows you to specify the directory to which the access point refers. When you enter the command, DECdfs adds this information to your node's server database. DECdfs also sends the access point name and your DECnet address information to the Digital Distributed Name Service (DECdns) if this service is available on your system.

Each access point name can contain from 1 to 255 characters. The name can consist of alphanumeric characters and underscores (_); a name in a hierarchical DECdns namespace can also contain period (.) characters. The dollar sign ($) is reserved for use by Compaq Computer Corporation.

It is important to discuss access point names with your DECdns manager before you attempt to create any. Each access point name in a DECdns namespace must be unique, and the names that you create must follow the conventions for your namespace. The organization of the namespace as single-directory or hierarchical also affects the types of names that you create.

A client node typically has one or more remote access points that are mounted automatically during system startup. At the conclusion of DECdfs startup, the startup procedure looks for the file SYS$STARTUP:DFS$SYSTARTUP.COM and runs it. The file typically contains a series of DFS$CONTROL MOUNT commands to mount the usual access points. If you want to mount access points from clients that are not running DECdns (refer to Section 2.3.2.2), you can edit DFS$SYSTARTUP.COM to include the appropriate /NODE qualifiers.

System managers responsible for a number of clients typically maintain a master DFS$SYSTARTUP.COM file which is distributed to the clients each time it is updated.

If you add an access point interactively, also edit the DFS$SYSTARTUP command file so that the server adds the access point automatically the next time DECdfs starts.
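As a sketch (the access point, device, node, and logical names are illustrative), the relevant lines of a DFS$SYSTARTUP.COM file for a node that both serves and mounts access points might look like the following, using the DFSCP command shown elsewhere in this chapter to invoke DFS$CONTROL:


$ DFSCP ADD ACCESS_POINT HELP DUA0:[000000]
$ DFSCP MOUNT HELP HELP_LIBRARY
$ DFSCP MOUNT DEC:.LKG.S.MYDISK /NODE=SRVR MYDISK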

Compaq recommends that you add access points that refer to an actual directory path, not to a directory alias. For example, on the OpenVMS system disk, the directory SYS$SYSDEVICE:[SYS0.SYSCOMMON] is an alias for the directory SYS$SYSDEVICE:[VMS$COMMON]. Compaq recommends using SYS$SYSDEVICE:[VMS$COMMON] as the access point directory. DECdfs cannot properly derive a full file specification when translating a file identification (FID) whose directory backlinks point to a directory different from the access point directory. If the access point does refer to a directory alias, incorrect backlink translation affects the SHOW DEVICE/FILES and SHOW QUEUE/FULL commands.

2.3.2.1 Systems with DECdns

The following list shows the steps for adding and mounting access points on systems running DECdns:

  1. The manager at DECdfs server node EIDER adds access point HELP, as follows:


    DFS> ADD ACCESS_POINT HELP DUA0:[000000]
    

    The access point refers to the master file directory ([000000]) of device DUA0:.

  2. The manager at the client then mounts access point HELP, producing a client device with the logical name HELP_LIBRARY. The response to the MOUNT command displays the client device unit number as DFSC1001:.


    DFS> MOUNT HELP HELP_LIBRARY
     
    %MOUNT-I-MOUNTED, .HELP mounted on _DFSC1001:
    

    DCL commands entered at the client, such as SET DEFAULT and DIRECTORY, operate on the DECdfs client device as on any other device.


    $ SET DEFAULT HELP_LIBRARY:[000000]
    $ DIR HELP_LIBRARY:M*.HLB
     
    Directory HELP_LIBRARY:[000000]
     
    MAILHELP.HLB;2          217  29-JUL-1998 14:39:57.50  (RWED,RWED,RWED,RE)
    MNRHELP.HLB;2            37  29-JUL-1998 14:41:36.41  (RWED,RWED,RWED,RE)
     
    Total of 2 files, 254 blocks.
    $ 
    

2.3.2.2 Systems Without DECdns

The current version of DECdfs has been modified to operate without using DECdns to accommodate OpenVMS Alpha systems running DECnet. If you have an OpenVMS Alpha system running DECnet Phase V, refer to Section 2.3.2.

A system not running DECdns, such as an Alpha server running DECnet, can be used as a DECdfs server with some limitations. You can declare access points with the DFS$CONTROL command ADD ACCESS_POINT; however, you must include the namespace name in the access point definition. For example:


DFS> ADD ACCESS_POINT DEC:.LKG.S.MYDISK DKA300:[000000]

This declaration adds the access point locally; that is, the access point is added to the DECdfs server's database but DECdfs does not add the access point to any external name server. However, in systems without DECdns, the MOUNT command in its usual form cannot determine where the specified access point is served. Therefore, the current version of DECdfs supports an additional qualifier to the MOUNT command that identifies the node which serves the access point. The new qualifier is /NODE=node_name and is shown in the following example:


DFS> MOUNT DEC:.LKG.S.MYDISK /NODE=SRVR MYDISK
 
%MOUNT-I-MOUNTED, DEC:.LKG.S.MYDISK mounted on _DFSC1001:

You must specify the fully expanded access point name in the MOUNT command. In the previous example, DEC: is the namespace name and .LKG.S.MYDISK is the access point name. The namespace name must appear at the beginning of the name and must be followed by a colon. If it is missing, DECdfs displays the following error message:


%DFS-E-NAMSPMSNG, Namespace component of access point is missing 

If the access point is served by a cluster system, the node name to be specified depends on the cluster configuration and how the access point is added. Refer to Section 2.8 for more information. If the access point is a cluster-wide access point, then the cluster alias can be used for the node name. Otherwise, the name of a specific cluster node, which is known to be serving the access point, must be used.

When the /NODE qualifier is specified with a MOUNT command, the node name is verified before any action is taken. On a DECnet Phase IV system, an unrecognized node name will produce the following message:


%SYSTEM-F-NOSUCHNODE, remote node is unknown 

On a DECnet Phase V system, the message is:


%IPC-E-UNKNOWNENTRY, name does not exist in name space 

When the /NODE qualifier is specified, DECdns does not check or expand the access point name even if DECdns is present on the system. The /NODE qualifier must be used to mount an access point on a server that does not have DECdns even if the client does have DECdns.

As stated earlier, the access point name must include the namespace component to be recognized at the server node. If the /NODE qualifier is used and the namespace component is not specified, the logical name DFS$DEFAULT_NAMESPACE is checked for a namespace prefix to use, for example:


$ DEFINE /SYS DFS$DEFAULT_NAMESPACE  DEC:
$ DFSCP MOUNT .LKG.S.DFSDEV.VTFOLK_DKA3 /NODE=VTFOLK

In this example, DECdfs attempts to mount the access point DEC:.LKG.S.DFSDEV.VTFOLK_DKA3. If DFS$DEFAULT_NAMESPACE is not defined, the following message is displayed:


%DFS-E-NAMSPMSNG, Namespace component of access point is missing 

2.3.2.3 Using the /LOCAL Qualifier

The ADD ACCESS_POINT and REMOVE ACCESS_POINT commands include a /LOCAL qualifier, which provides functionality similar to the /NODE qualifier described in Section 2.3.2.2.

As with MOUNT/NODE, the /LOCAL qualifier prevents any use of DECdns even if it is present. This enables you to use DECdfs without setting up a DECdns namespace and name server even on systems where DECdns is available.

When you use the /LOCAL qualifier, DECdfs checks the logical name DFS$DEFAULT_NAMESPACE when an access point is specified without a namespace component. Therefore, you can include a command similar to the following in the DFS$CONFIG.COM startup file:


$ DEFINE /SYS DFS$DEFAULT_NAMESPACE DEC: 

This allows unprefixed access point names to be used in a manner consistent with traditional use on DECdns systems. For example, the following commands are valid if DFS$DEFAULT_NAMESPACE is defined:


DFS> ADD ACCESS_POINT .LKG.S.MYDISK /LOCAL
DFS> MOUNT .LKG.S.MYDISK /NODE=VTFOLK

If you do not include the namespace name with ADD ACCESS_POINT or REMOVE ACCESS_POINT and DFS$DEFAULT_NAMESPACE is not defined, DECdfs displays the following message:


%DFS-E-NAMSPMSNG, Namespace component of access point is missing 

Refer to Chapter 4 for more information on DFSCP commands.

2.3.3 Determining Access Point Information

You can display access point information by entering the DFS$CONTROL command SHOW ACCESS_POINT on a server node or on any existing client node with DECdfs and DECdns installed.


DFS> SHOW ACCESS /FULL access-point-name

If you specify the access point name, the command responds with a line showing the full access point name and the server node:


DFS> SHOW ACCESS /FULL .LKG.S.DFSDSK
             DEC:.LKG.S.DFSDSK on BIGVAX::DUA30:[000000]

You can use this information in a DFS mount command as follows:


DFS> MOUNT DEC:.LKG.S.DFSDSK /NODE=BIGVAX

You can also specify a logical name and other qualifiers on the MOUNT command line.

2.3.4 Changing Access Points

Once you have created an access point, its name must always refer to the same information or files. On some occasions, however, you might want to remove or change an access point or change the location of the directories to which an access point refers.

Caution

Use caution when removing or changing an access point, because doing so can disrupt the user environment on client systems.

To remove an access point name, enter the REMOVE ACCESS_POINT command. This command removes the name from the server database and from DECdns. However, it does not notify client systems that currently have the access point mounted. On these systems, any subsequent attempt to use the access point will fail except for operations on files that are currently open. Client users will receive an error code identifying the failure.
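For example, the following command removes an access point name used earlier in this chapter:


DFS> REMOVE ACCESS_POINT .LKG.S.DFSDSK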

2.3.5 Removing Access Points Added with the /CLUSTER_ALIAS Qualifier

Removing access points from servers in a cluster sometimes requires extra steps. The original ADD ACCESS_POINT command registers the access point name in both the DECdns namespace and the local server database. The REMOVE ACCESS_POINT command attempts to remove the name from both the DECdns namespace and the local server database. However, if you registered the access point according to its server's cluster alias (that is, the ADD ACCESS_POINT command had the /CLUSTER_ALIAS qualifier), you must perform some extra procedures to remove the access point.

The REMOVE ACCESS_POINT command deletes the DECdns access point name entry. This command also removes the access point from the server's local database, but it does so only on the cluster member at which you enter the REMOVE command. An informational message reminds you of this.

To remove an access point that was registered by cluster alias, you must use the fully expanded access point name on all cluster members except the first server on which you entered the REMOVE ACCESS_POINT command.

To display the fully expanded access point name, enter the following command:


DFS> SHOW ACCESS_POINT /LOCAL /FULL

Remove the access point on each server by entering the REMOVE ACCESS_POINT command with this fully expanded access point name and the exact punctuation. When you enter this command at the first DECdfs server, you remove the access point name from the DECdns database. Subsequent REMOVE ACCESS_POINT commands at the other DECdfs servers in the cluster generate warnings that the access point is not in the DECdns namespace, but this does not indicate a problem. When you enter the fully expanded name at each server, you remove the access point from the server's local database.

To continue serving the access point on other cluster members, you can reregister the access point by using the ADD ACCESS_POINT/CLUSTER_ALIAS command on one of the other nodes. This replaces the access point name in the DECdns namespace. Disable the incoming alias on the node (or nodes) from which you removed the access point.

For DECnet Phase IV:

Use the following NCP command to disable the incoming alias:


NCP> SET OBJECT DFS$COM_ACP ALIAS INCOMING DISABLED 

For DECnet Phase V:

Use the following NCL command to disable the incoming alias:


NCL> SET SESSION CONTROL APPLICATION DFS$COM_ACP INCOMING ALIAS FALSE

To disable the incoming alias permanently, edit the NET$SESSION_STARTUP.NCL NCL script file.

2.3.6 Maintaining Consistency with DECdns

On certain occasions, DECdns can continue to supply outdated information to other nodes about access points on your server. Each time that you enter the ADD ACCESS_POINT command, you register the new access point name with DECdns. Until you explicitly remove the name by entering a REMOVE ACCESS_POINT command, DECdns retains it. DECdns therefore contains and supplies to other nodes information about unavailable access points on your server under the following conditions:

In either case, DECdns continues to supply outdated information (the access point name and the server's DECnet address information). If a new client attempts to mount the access point, the client receives a message stating that the access point is unavailable. If a client that previously mounted the access point attempts to read or write to an open file, an error occurs and returns an SS$_INCVOLLABEL error code. If such a client attempts to open a new file or to search a directory on the client device, the client attempts mount verification (see Section 3.4.5), which then fails.

While you cannot prevent the server from being unavailable occasionally, you can prevent the loss of access points by always adding new access points to the DFS$SYSTARTUP file. If you stop the server permanently, be sure to enter a REMOVE ACCESS_POINT command for each access point on your system.

2.4 Protecting Server Files

DECdfs handles security and file access according to OpenVMS conventions, but a few differences exist. DECdfs allows any user to enter a MOUNT command, regardless of volume-level protections. However, DECdfs performs access checking at the time of file access.

The server uses proxy access to verify a user's access to an account (see the OpenVMS Guide to System Security). The server does not perform an actual proxy login, however, since DECdfs accesses a node through the DECdfs server process. The server process performs file operations on behalf of the user at the client, and it impersonates the user by performing these operations in the name of the user's proxy account. Files created on behalf of a client user are therefore owned by the user's proxy account, not by the server process's account. Section 2.2 describes more fully how the DECdfs server validates user access.

2.5 Protecting Individual Files

DECdfs allows any user at any DECdfs client to mount an access point. On the server, however, standard OpenVMS file access protection applies to each file. The OpenVMS operating system uses a combination of user identification codes (UICs), privileges, protection settings, and access control lists (ACLs) to validate each file access according to the user's proxy account.

You can allow or disallow file operations by DECdfs users by specifying the DFS$SERVICE identifier or the NETWORK identifier in an access control entry (ACE).

The DFS$SERVICE identifier applies only to users at DECdfs clients. The NETWORK identifier applies to users at DECdfs clients and all other network users.

You can explicitly place ACLs on DECdfs files only by logging in to the server system. The OpenVMS operating system recognizes the ACLs, so you can use them from the server to protect or grant access to the server files. However, DECdfs suppresses ACLs as seen from the client. A user with access to a DECdfs client device cannot create or view the ACLs on files residing at the server. Attempting to use the SET ACL/OBJECT_TYPE=FILE or EDIT/ACL command at a client to modify a server file produces an error message. The DIRECTORY/SECURITY and DIRECTORY/FULL commands return displays that omit the ACLs on any files in the directory listing.
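For example, while logged in at the server, you might grant DECdfs users read-only access to a particular file with an ACE that names the DFS$SERVICE identifier. The file name is illustrative, and the command is a sketch of the SET ACL syntax; see the OpenVMS DCL documentation for details.


$ SET ACL/OBJECT_TYPE=FILE/ACL=(IDENTIFIER=DFS$SERVICE,ACCESS=READ) REPORT.TXT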

2.6 Managing the Persona Cache

The server uses a persona cache, which contains information about individual client users. The server uses this information to determine whether a client user has permission to access individual files. This section explains how you control the operation of the persona cache.

When incoming user requests arrive at the server, the server process interacts with the OpenVMS operating system to create or access the requested files. To perform this operation on behalf of a particular user, the server builds a profile of that user. The server checks the NETPROXY.DAT file for the user's proxy account, the SYSUAF.DAT file for the user's privileges and UIC, and the RIGHTSLIST.DAT file for any identifiers granting additional rights.

The server places all of this information in a persona block. When creating or accessing a file on behalf of the user, the server process impersonates the user according to the persona block information. Although the server process itself is interacting with the OpenVMS file system, each file appears to be accessed by, and in accordance with the privileges of, the proxy account.

The persona cache helps to accelerate file access. After the server creates an individual persona block, the server reuses it each time that user accesses another file. This saves time because the server need not reread the NETPROXY.DAT, SYSUAF.DAT, and RIGHTSLIST.DAT files at each file access.

DECdfs automatically sets the size of the cache based on the number of users. As the number of users increases, DECdfs borrows from nonpaged pool to meet the demand. When the number of users decreases, DECdfs returns unused blocks to nonpaged pool.

2.6.1 Specifying the Lifetime of Persona Blocks

Persona blocks have a specified lifetime, which you can adjust by using the SET SERVER/PERSONA_CACHE=UPDATE_INTERVAL command. When the persona block for a user expires, the server validates the user's next access by reading the three authorization files and building a new block. This ensures that, at a specified interval, the DECdfs server automatically incorporates any changes that you make to any of the authorization files.

If DECdfs users at client systems complain that the response time for opening files is too long, consider lengthening the update interval.
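For example, to set the persona block lifetime to 10 minutes, you would enter a command of roughly the following form. The delta-time value is only an illustration, and Chapter 4 gives the exact keyword syntax of the /PERSONA_CACHE qualifier.


DFS> SET SERVER/PERSONA_CACHE=(UPDATE_INTERVAL=00:10:00)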

2.6.2 Flushing the Cache

You can flush the persona cache by using the SET SERVER/INVALIDATE_PERSONA_CACHE command. This forces the server to build a completely new cache, validating each new user access from the authorization files. You can flush the persona cache after making changes to access rights or proxy accounts without waiting for the update interval to expire.
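For example:


DFS> SET SERVER/INVALIDATE_PERSONA_CACHE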

You need to restart the server if you have replaced the RIGHTSLIST.DAT file by copying the file or changing the file's logical name assignment. You do not need to restart the server if you have replaced or copied the NETPROXY.DAT file or SYSUAF.DAT file or if you have changed either of these two files' logical name assignments.

2.6.3 Displaying Cache Counters

Table 2-1 lists and explains the counters that are available for the persona cache. To display the persona cache counters, use the following DFS$CONTROL command:


DFS> SHOW SERVER/COUNTERS

Table 2-1 Persona Cache Counters
Persona Blocks Active: The current number of simultaneously active persona blocks.
Maximum Persona Blocks Active: The highest number of simultaneously active persona blocks since the server started.
Persona Cache Blocks Allocated: The current number of allocated persona blocks, including both currently active and inactive persona blocks.
Maximum Persona Cache Blocks Allocated: The highest number of allocated persona blocks since the server started. This tells how large the cache has been since the last startup.
Persona Cache Hits: The number of times the server was able to reuse an existing persona block to satisfy an incoming file request.
Persona Cache Misses: The number of times the server was forced to build a new persona block to satisfy a new file request.
Persona Cache Threshold: The number of preallocated persona blocks that the server maintains.

2.7 Managing the Data Cache

Managing the data cache involves periodically using the server counters to monitor DECdfs performance, reassessing server use, and tuning the data cache parameters to maintain good performance.

The DECdfs server data cache improves performance by caching blocks of files to expedite the repeated use of files or parts of files. Many files on a system, such as command procedures or executable files, are used repeatedly. In addition, during access of a file, the same blocks in the file are often read and written many times. DECdfs stores file data in its data cache to eliminate unnecessary disk accesses. The caching takes place on both read and write requests.

To further improve performance, DECdfs prefetches subsequent blocks from files being accessed sequentially; that is, during sequential file access operations, DECdfs anticipates your needs, moving data from the disk to the cache so it is available when you actually request it.

The server's data cache is a write-through cache. It does not affect standard RMS caching, which occurs on the client system.

2.7.1 Specifying the Size of the Cache

To specify the size of the data cache, enter the following command:


DFS> SET SERVER/DATA_CACHE=COUNT_OF_BUFFERS

This command allocates a certain number of buffers from nonpaged pool to use in the data cache. The size of each buffer is fixed. Each buffer takes 8192 bytes of data plus 50 bytes of header information, for a total of 8242 bytes.

If you increase the count of buffers past the default value, increase the amount of nonpaged pool (the NPAGEDYN parameter) by a corresponding number of bytes. To do so, modify the SYS$SYSTEM:MODPARAMS.DAT file and rerun AUTOGEN (see the OpenVMS System Manager's Manual).
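For example, if you planned to use 20 buffers more than the default, you might reserve roughly 20 x 8242 = 164,840 additional bytes of nonpaged pool. The figures are illustrative, and the lines below would be added to MODPARAMS.DAT before rerunning AUTOGEN:


! Additions to SYS$SYSTEM:MODPARAMS.DAT
ADD_NPAGEDYN = 164840    ! 20 extra DECdfs data cache buffers at 8242 bytes each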

2.7.2 Specifying the Per-File Quota

The file buffer quota improves performance as follows:

You can specify how many file cache buffers a single file uses by entering the following command:


DFS> SET SERVER/DATA_CACHE=FILE_BUFFER_QUOTA

When a user makes an initial request for read access to a file, the server moves data from the disk to the cache. As the user continues to request read and write access to the same file, the server continues to allocate buffers to the file. Once the server reaches the quota, however, it reuses a file's buffers, beginning with the one least recently used. If that buffer is currently in use, the server ignores the quota and uses the least recently used available buffer in the cache. If no buffer is currently available in the cache, the file request waits.

If you choose to adjust the file buffer quota, consider what types of files you use with DECdfs. If users repeatedly access one large file, such as an executable file or a shared design template, a high file quota can be useful. Adjustments to this value should reflect the patterns of use at your site. To monitor the use and efficiency of the cache, use the SHOW SERVER/COUNTERS command.

2.7.3 Displaying Cache Counters

Table 2-2 lists and explains the data cache counters. To display the data cache counters, use the following DFS$CONTROL command:


DFS> SHOW SERVER/COUNTERS

Table 2-2 Data Cache Counters
Data Cache Full: The number of times that the least recently used buffer was currently in use and a request had to wait for a buffer.
Data Cache Hits: The number of times that the server was able to satisfy a read request by finding a requested block in the cache. The server therefore avoided accessing the disk.
Data Cache Misses: The number of times that the server was unable to satisfy a read request by finding a requested block in the cache. The server was therefore forced to access the disk.
Data Cache Quota Exceeded: The number of times that a particular file used more buffers than its specified quota.
Physical Writes: The number of times that the server wrote a block to disk.
Physical Reads: The number of times that the server read a requested block from disk.

Frequent high numbers for the Data Cache Full counter indicate that your server is very busy. When the cache is full and file requests wait for buffering, performance can degrade. Monitor this counter and consider raising the buffer count value if necessary.

Interpret the hits-to-misses ratio according to the application for which you use DECdfs. Sequential accesses should produce a high hits-to-misses ratio because of the prefetching DECdfs performs. Nonsequential accesses (or a very busy server with frequent reuse of cache blocks) can produce a low hits-to-misses ratio. To correct a consistently low hits-to-misses ratio, consider increasing the buffer count value by using the SET SERVER/DATA_CACHE=COUNT_OF_BUFFERS command.

The Physical Writes and Physical Reads counters indicate the number of times the server performed a disk I/O operation.

2.8 Using a Cluster as a DECdfs Server

You can make a device and directory available as an access point from a cluster system by using a cluster alias. A cluster alias serves a single access point from all cluster members when the incoming alias is enabled.

Sections 2.8.1 and 2.8.2 explain how to serve an access point from a cluster alias and from individual cluster members.

2.8.1 Serving an Access Point from a Cluster Alias

To create an access point that is registered by the cluster alias, follow these steps:

  1. Install and start the DECdfs server on each node in the cluster for which the incoming alias is enabled.
  2. Add the access point by using the /CLUSTER_ALIAS qualifier with the ADD ACCESS_POINT command, as shown in the example following this list. This supplies DECdns with the cluster alias instead of the node address as the access point's location.
  3. Repeat the same ADD ACCESS_POINT command on each DECdfs server node in the cluster.
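For example, the manager of each server node in the cluster would register the same access point as follows; the access point name and device are illustrative:


DFS> ADD ACCESS_POINT .LKG.S.CLUSTERDISK DUA2:[000000] /CLUSTER_ALIAS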

After you have completed these steps, a client system that mounts the access point connects to the cluster rather than to a specific node. DECnet software at the cluster chooses the node that will serve the client. The failure of one node does not prevent a DECdfs client from mounting an access point. If the server node involved in a DECdfs communication session becomes unavailable, another cluster member can respond when the DECdfs client tries to reestablish the connection. This allows the DECdfs session to proceed with minimal interruption to the user.

2.8.2 Serving an Access Point from Individual Cluster Members

If you do not enable the cluster alias, or if you have not installed the DECdfs server software on all members of the cluster, you can still serve the same device and directory from multiple nodes. The access point, however, must have a different name on each node. The access point name simply represents an alternative route to the same device and directory.

2.9 Stopping and Starting DECdfs on Your System

Before stopping DECdfs on your system, it is important to notify users. You can determine whether users are currently accessing the server by entering the following command:


DFS> SHOW SERVER /USERS

You can determine whether DECdfs users are accessing a local client by entering the SHOW COMMUNICATION/CURRENT command and looking for active outbound connections. This procedure does not identify users by name. However, you can use the DCL REPLY command to notify those users before stopping the server.
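For example, an operator with OPER privilege might warn interactive users before the shutdown; the message text is illustrative:


$ REPLY/ALL/BELL "DECdfs server shutting down at 17:00; please close files on DFSC devices"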

To stop DECdfs on your system without aborting user file access, enter the DFS$CONTROL command SHUTDOWN COMMUNICATION. This allows existing communication sessions to complete but refuses new requests. With communications shut down, the following DECdfs commands do not function:

To stop DECdfs operations immediately, use the STOP COMMUNICATION command. This command immediately aborts current user file operations and stops the Communication Entity and server.

Note

For DECnet Phase IV:

If you stop DECnet (by entering the following NCP command, for example), the DECdfs communication and server ancillary control processes also stop:


NCP> SET EXECUTOR STATE OFF

For DECnet Phase V:

If you stop DECnet (by entering any of the following commands to disable the data link, for example), all connections are lost. DECdfs will be unable to establish connections to disk drives until the network is started.


NCL> DISABLE NODE 0 ROUTING
NCL> DISABLE NODE 0 NSP
NCL> DISABLE NODE 0 SESSION CONTROL

To start DECdfs on your system, run the file SYS$STARTUP:DFS$STARTUP.COM.
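For example:


$ @SYS$STARTUP:DFS$STARTUP.COM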

Note

Make sure DECnet is running before you restart DECdfs. Restarting DECnet or restarting the Communication Entity does not restart the DECdfs server; you must explicitly execute the DECdfs startup command file.

