This chapter contains notes about issues and known problems with the base operating system and, whenever possible, provides solutions or workarounds to those problems.
The following topics are discussed in this chapter:
The following notes apply to restrictions on using functions that support internationalization or internationalized components.
The return values of the iswdigit() and iswalnum() functions for Thai digits in the range 0xF0 to 0xF9 are false, even though these values are defined as digits in the Wototo Standard, Version 2.0, relating to the Thai language.
If two or more print jobs are sent to different queues of the same printer within a very short time, some jobs may become blocked and fail to print. Restart a blocked job using the lpc command.
Depending on the composition of the line, the vi editor that supports Thai may wrap lines before the right boundary of the screen. For a normal 24x80 screen, a line wraps if more than 80 Thai or ASCII characters are entered, even when the display width of the line is fewer than 80 columns.
Digital UNIX version 4.0 contains a new curses implementation that incorporates the following sets of programming interfaces:
These interfaces support the new curses standard and also maintain source compatibility with the curses interfaces found in the previous version of Digital UNIX. However, they do not maintain binary compatibility. Therefore, additional library files are also provided to maintain binary compatibility with existing programs.
The new curses implementation can be found in these directories:
Directory    | Contents
/usr/include | curses.h, curshdr.h, term.h, unctrl.h
/usr/lib     | libcurses.a
/usr/shlib   | libcurses.so
Libraries supporting the previous curses implementation will be found in the following directories:
Directory        | Contents
/usr/opt/lib     | libcurses.a
/usr/shlib/osf.1 | libcurses.so
There are no header files provided for the old implementation. All programs that call curses directly will use the new header files when compiling and linking with the new libraries.
Existing binary files that were built with the old version of the curses shared library, libcurses.so, will continue to run without modification. Through a technique called versioning, old binary files select the correct shared library.
When rebuilding applications that do not call curses directly, but which link with libraries that interface with the old curses implementation, set LD_LIBRARY_PATH to the proper directory as indicated in the preceding table, or else specify the complete pathname on the compile command.
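For example, to rebuild such an application against the old shared library, the directory from the preceding table could be placed on the library search path (a sketch using sh syntax):

# LD_LIBRARY_PATH=/usr/shlib/osf.1
# export LD_LIBRARY_PATH

Alternatively, name the old library by its complete pathname, /usr/shlib/osf.1/libcurses.so, on the compile command.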
Many functions declared in curses.h and term.h are implemented both as routines and macros. In these cases, the default is to compile them as macros which execute more quickly. If you wish to execute the corresponding routines instead of using the macros, add the _NOMACROS compile-time switch.
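For example, the switch can be supplied on the compile command line (the program name here is hypothetical):

# cc -D_NOMACROS -o myprog myprog.c -lcurses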
The new curses library provides source compatibility with the old version with respect to function calls, data names, and macro names. However, the implementation of types, variables, structures, and macros has changed considerably in some cases. If existing user code depends on the assumed implementation details of such elements, that program may need to be changed to work with the new curses. Elements whose implementations have changed include:
In previous implementations, the ACS definitions in curses.h were statically defined constant values. In the new implementation, the definitions are dynamically defined at run time based on the current setting of the TERM environment variable. This implementation allows terminals with extra ACS capabilities to make them available to the user while providing a set of default ACS definitions for terminals with lesser capabilities.
The implementation changes may cause compile time failures for some programs that depend on the static definitions. For example, the following declaration will not compile when it occurs at the global level:
char A = ACS_ULCORNER;
The new implementation of the curses library defines the ACS definitions at run time and requires that all assignments be made after the initscr() function has been called.
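A minimal sketch of the required ordering (the variable name is arbitrary):

#include <curses.h>

int main(void)
{
    chtype ul;

    initscr();           /* ACS_ULCORNER is defined only after this call */
    ul = ACS_ULCORNER;   /* assignment must follow initscr()             */
    addch(ul);
    refresh();
    endwin();
    return 0;
}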
The X/Open Curses standard requires that the cbreak() function disable the ICRNL input processing flag. In the previous Digital UNIX implementation, cbreak() did not disable this flag. In applications that relied on this default behavior to advance to new lines, subsequent output lines may now overwrite the last line addressed. Those applications should now set the ICRNL flag explicitly after the call to cbreak(). Here is a sample code fragment that sets the ICRNL flag:
#include <termios.h>

struct termios tty;

tcgetattr(0, &tty);
tty.c_iflag |= ICRNL;
tcsetattr(0, TCSANOW, &tty);
The following notes apply to commands and utilities.
Note
This is an important note for users of vdump and vrestore
Backups made using vdump on Digital UNIX Version 4.0 cannot be restored using vrestore on earlier versions of Digital UNIX. Patches will be made available for earlier versions of vrestore to correct this problem.
Backups made using vdump on earlier versions of Digital UNIX can be restored using vrestore under Digital UNIX Version 4.0 without problems.
The following notes describe problems that may occur when using commands and utilities under certain security settings.
Programs cannot reliably inspect the permission bits in the stat structure to determine the access that will be granted to a particular user. On local filesystems, read-only mounts and ACLs can both modify the access that will be allowed. On remote filesystems, in addition to read-only mounts and ACLs, there may be additional controls that can alter the permitted access, such as:
Programs which copy files to update them, rather than updating them in place, will often not preserve ACLs. Some programs that have this problem are gzip, compress, and emacs.
The best solution for programs that need to make access decisions is for the program to use the access() call to determine what access will be granted. Note that even this may not work as the access protections of the file could be changed between the access() call and the read, write, or execute operation.
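A minimal sketch, using a hypothetical helper that checks for write access before an update; note the window between the check and the later operation:

#include <unistd.h>

int can_update(const char *path)
{
    /* access() reports the access the caller would currently be granted,
     * taking ACLs and mount options into account.  The answer can still
     * change before the file is actually opened. */
    if (access(path, W_OK) == 0)
        return 1;
    return 0;
}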
For programs which copy files, the command:
#
cp -p
will copy a file preserving ACLs and any other extended attribute (property list).
See the acl(4), and proplist(4) reference pages for more information.
This note clarifies the interactions between the archive tools pax, tar, and cpio and files containing property lists or Access Control Lists (ACLs).
When you extract files with the above utilities without using the -p option, and the following conditions apply:
Then, when files that do not have an associated ACL are extracted, they inherit the default ACL that is in force on the directory in which they are being created. This behavior was selected to allow file extractions to work as expected in as many cases as possible.
For times when this behavior is not appropriate, an alternative behavior is associated with the -p option. If you use the -p option, any file extracted from the archive is given only the property list, ACL, and file permissions that were stored in the archive with that file. For tar archives you can use either the -p option to the tar command or the -p p option to the pax command. For cpio archives you can use the -p p option to the cpio command.
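For example, assuming an archive named product.tar (the name is hypothetical), either of the following restores the permissions, ACL, and property list stored with each file:

# tar -xpf product.tar
# pax -r -p p -f product.tar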
See the tar(1), pax(1), cpio(1), and tar(4) reference pages.
By default, emacs will rename the original file, and save the new file as a copy, under the original name. If the original file had an Access Control List (ACL) it will now apply to the backup file. If the directory had a default ACL, the new file (original filename) will now have the default ACL instead of the original ACL. If the directory did not have a default ACL, the new file will be protected only by the file permission bits.
The emacs utility has some user-preference variables that can be set to control which file will retain the original ACL. The relevant emacs variables are:
Since the mailx and Mail commands have become XPG4 compliant, their command behavior has changed. Entering a carriage return with no arguments in command mode no longer displays the next message. A carriage return with no arguments now behaves like the print command rather than the next command: the current message is displayed and the message pointer stays on the current message.
The gendisk utility is used to create product media. There is a problem in using it on FDI diskette devices, the diskette drive found on all non-Turbochannel bus Alpha platforms.
The solution involves making some hard links to the diskette device special files using the name of the device that gendisk will use.
#
cd /dev
#
ln rfd0c rfl0c
#
ln rfd0a rfl0a
#
ln fd0a fl0a
#
ln fd0c fl0c
#
fddisk -fmt /dev/rfd0c
You will see the following messages:
NOTE: Setting interleave factor to ``-i2:4''.
      Use ``-i<nnn>[:<ccc>]'' option to override.
Disk type: 3.50 inch, HD (1.44MB)
Number of sectors per track: 18
Number of surfaces: 2
Number of cylinders: 80
Sector size: 512
interleave factor: 2:4
Formatting disk...
Percentage complete:
Format complete, checking...
Quick check of disk passes OK.
#
disklabel -wr fd0 rx23
Note
When running the gendisk utility on the diskette using these instructions, do not respond yes to the question that asks whether to clean the entire disk.
The following is an example of a gendisk command session:
#
gendisk -d MYPRODUCT400 /dev/rfd0c
Generating MYPRODUCT400 Kit from <system address> on /dev/fl0c
WARNING: this will remove any information stored in /dev/fl0c. Are you sure you want to do this? (y/n): y
Do you want to clean the entire disk first? Note: This will replace your current disk label with a default one. (y/n) [n]: n
Preparing /dev/fl0c (floppy) done.
Checking /dev/fl0c
/sbin/ufs_fsck /dev/rfl0c
** /dev/rfl0c
File system unmounted cleanly - no fsck needed
Mounting /dev/fl0c on /usr/tmp/cd_mnt8344
Writing Images (dd=/).
Image instctrl...done.
Image SVGASTATIC100...done.
Verifying Images (dd=/).
Image instctrl...done.
Image SVGASTATIC100...done.
Kit MYPRODUCT400 done.
Cleaning up working directories.
Unmounting /dev/fl0c
Digital ships the emacs software as it is received from the source. The following command line options do not work as documented in the emacs reference page:
-geometry
-iconic
In some cases a solution is available using an appropriate X resource.
The /usr/opt/sterling directory tree has been renamed to /usr/opt/obsolete.
All the files that were in the /usr/opt/sterling directory have been moved to the /usr/opt/obsolete directory tree. The following files have been moved within the /usr/opt file system.
Old location of commands | New location of commands
~sterling/sbin/cpio      | ~obsolete/sbin/cpio
~sterling/sbin/tar       | ~obsolete/sbin/tar
~sterling/usr/bin/cpio   | ~obsolete/usr/bin/cpio
~sterling/usr/bin/tar    | ~obsolete/usr/bin/tar
If you have scripts that rely on the old location, a symbolic link can be placed in /usr/opt to point to the new location as follows:
#
cd /usr/opt
#
ln -s obsolete sterling
To run the POSIX shell, the environment variable BIN_SH must be set to xpg4. The POSIX shell is then invoked when the user runs the command sh.
The POSIX shell is located in /usr/bin/posix/sh. If BIN_SH is not set to xpg4, the Bourne shell is invoked when the user runs the sh command. Relative or absolute paths are not determining factors; executing /usr/bin/sh gives the same result as sh. The determining factor is the environment variable BIN_SH.
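For example, from an sh-compatible login shell:

# BIN_SH=xpg4
# export BIN_SH
# sh

With BIN_SH set to xpg4, the sh command invokes /usr/bin/posix/sh; with any other setting, or with BIN_SH unset, it invokes the Bourne shell.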
In sendmail, the maximum hop count is now configurable. If not specified, the hop count defaults to 17. Each time a message is forwarded through a host, the hop count is increased. When this count exceeds the maximum hop count value, the message is rejected, because it is automatically assumed that an endless loop has occurred.
The default value is acceptable in most installations but you may want to increase the value if too many messages are being lost.
The current values reported by the cksum command are incorrect according to IEEE Std 1003.2-1992. To conform with XPG4 requirements, new calculations have been made for the checksums.
For backward compatibility, the default action of the cksum command is to report the existing cksum values. To obtain the new checksums, set the environment variable CMD_ENV to the string xpg4. For example:
export CMD_ENV=xpg4
The default behavior for the df command is BSD SVR4 compliant. If XPG4 compliant behavior is desired, set the CMD_ENV environment variable to xpg4. The XPG4 compliant df command takes the following syntax:
df [-eiknPt] [-F fstype] [file | file_system ...]
See the df (1) reference page for more information.
The default of the /usr/bin/echo command is compliant with the XPG4 standard. If the CMD_ENV environment variable is not set or is set to xpg4, the echo command will treat the option -n as a string. The echo command supports the -n option when the environment variable CMD_ENV is set to bsd.
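For example (the string is arbitrary), the two settings can be compared from an sh-style shell:

# CMD_ENV=bsd /usr/bin/echo -n hello
# CMD_ENV=xpg4 /usr/bin/echo -n hello

In the first command, -n suppresses the trailing newline; in the second, -n is written out as part of the output.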
In a future version of Digital UNIX, the link between awk and nawk will be removed, leaving an XPG4-compliant version of awk. You should ensure that your scripts use /usr/bin/awk in place of any other version of the command currently existing on the system.
The gawk command invokes GNU awk, the Free Software Foundation version of awk. This command has been moved out of the base subset into the Free Software Foundation subset. The oawk command has been removed.
The vmh command in the /usr/bin/mh suite is not supported in this release.
The /usr/bin/od command has the following restrictions:
The following notes apply to restrictions on using the SysMan applications.
The disk configuration manager is a new application that allows you to inspect and modify disk attributes, partition information in particular. Type /usr/sbin/diskconfig at the command line to invoke the application and display the top-level window. This note describes some restrictions on using the disk configuration manager in Digital UNIX Version 4.0.
The diskconfig utility displays a list of disks attached to the system. This occurs implicitly upon invocation. The selection of a disk from this list presents the attributes of the disk as recorded on the disk label if a label is present and the drive is not defective. If no label is present and the drive is not defective, default information is presented. An error message is presented if the drive is defective. Much of the data displayed in the windows can be edited by the user to change the characteristics of a disk label. Known problems are as follows:
The following notes apply to the account manager, dxaccounts.
When copying user accounts via cut and paste or drag and drop, the Allow Duplicate UIDs option in the General Preferences dialog box is honored. For example, when making a copy of a user account that has a UID of 200, if the Allow Duplicate UIDs check box is off (the default), the resulting copy will have a unique UID automatically generated. If the Allow Duplicate UIDs check box is on, the copy will have an identical UID. The same rules apply to copying groups.
The account manager has the following restrictions on both base security and enhanced security (C2) systems:
Workaround: Delete the original icon after the copy has been completed.
Workaround: Set a starting value within the range using the usermod or groupmod commands:
usermod -D -x next_uid=xxx
usermod -D -x next_gid=xxx
For example, if the Minimum UID is 100 and the Maximum UID is 10000 then:
usermod -D -x next_uid=5000
causes account manager to start generating UIDs from 5000.
Workaround: Use the chown command to change the directory and files, if applicable.
Workaround: Use the copy/paste feature to copy users, groups, or templates from account manager A to B.
Workarounds: The account manager correctly allows two or more system administrators to work on the same password files simultaneously. The proper file locking occurs, and new accounts can be added or modified. However, the local groups file, /etc/group, and the NIS groups file, /var/yp/src/group, are written out after each group modification. Therefore, the last system administrator to make a change in a groups view window would overwrite any prior changes from a different system administrator. For this reason, running multiple, concurrent account manager instances is not recommended.
Warning: DtComboBoxWidget: Unable to find item to select
Workaround: None. These messages can be safely ignored.
Leading and trailing white space is not stripped from text entry areas. This could lead to confusion if, for example, a field on the Find dialog contains a space character before the desired search string. The search string would not match because of the spurious space character.
The following problems apply to the account manager when running on enhanced security systems.
Workaround: Change the template lock setting on the Create/Modify Template dialog screen after selecting the template by double clicking on the template icon in the Template view icon box.
Workaround: Set passwords through /usr/tcb/bin/dxchpwd or the /usr/bin/passwd command when the C1Crypt Encryption type is chosen.
Workaround: Set the C1Crypt Encryption type for the user from the Create/Modify User dialog.
Workaround: Set passwords through /usr/tcb/bin/dxchpwd or the /usr/bin/passwd command if the Minimum/Maximum password length limitation is necessary.
Workaround: To delete a user account you must do the following:
#
/usr/tcb/bin/edauth -r <user name>
Workaround: Use the following command to remove the dangling protected password database entry:
#
/usr/tcb/bin/edauth -r <user name>
Workaround: Restart the account manager to restore the former template icon. Delete the undesired template using the Delete Toolbar icon or the Edit->Delete... option from the Template view.
Workaround: Modify the copied user and change his template from default to the desired template. Note that the template reference is maintained if the user is dropped within the same view.
Workaround: Only the drag and drop method of template assignment has this problem. You can use the Create/Modify dialog box to change a single user's template or use the Modify Selected dialog box to change templates for several selected users. Both methods will correctly propagate the template's lock field.
Workaround: None.
Workaround: Please restart account manager and then delete the template.
Workaround: Manually remake the NIS maps or perform an account manager function (for example, Account Modification) that will trigger the maps to be remade. To manually remake the maps, do the following:
#
cd /var/yp
#
make all
Workaround: Change the template lock setting on the Create/Modify Template dialog screen after selecting the template by double clicking on the template icon in the Template view icon box.
Workaround: Set the C1Crypt Encryption type for the user from the Create/Modify User dialog.
Workaround: Set passwords through /usr/tcb/bin/dxchpwd or the /usr/bin/passwd command if the Minimum/Maximum password length limitation is necessary.
Workaround: The user must be removed from the BSD databases by manually editing the /etc/passwd and /etc/group files. The user can be removed from the Protected database using the command:
#
/usr/tcb/bin/edauth -r <user name>
Workaround: To remove a dangling protected entry, use the command:
#
/usr/tcb/bin/edauth -r <user name>
Workaround: Restart the Account Manager to restore the former template icon. Delete the undesired template using the Delete Toolbar icon or the Edit->Delete... option from the Template view.
Workaround: Manually update the Lock field by selecting the Lock toggle button or assign templates to users through the Template pull down list of the Create/Modify User dialog.
Workaround: Manually assign the template to the user through the Template pull down list of the Create/Modify User dialog.
Workaround: Restart the account manager and then delete the template.
Workaround: Manually remake the NIS maps or perform an account manager function (for example, Account Modification) that will trigger the maps to be remade.
When using System Information, dxsysinfo, the swap warning light will not illuminate when the available swap space falls below 10 percent free unless the available swap meter is being displayed. Both of these options can be activated by selecting them from the View menu.
The value of Percent Full in the file system area of dxsysinfo may be inaccurate.
For the correct value of Percent Full, use the df command and refer to the Capacity value.
The dxsysinfo application may display /dev/prf as a tape device depending on the subsets installed on the particular machine. Also, when Update Devices is selected, another /dev/prf icon may be added to the device area.
The dxshutdown application does not create the /etc/nologin file as described in the documentation. This means that users will be able to log in to a machine that is being shut down, up until the actual time of the shutdown.
Note that this behavior differs from that of the shutdown command, which creates the /etc/nologin file 5 minutes prior to the shutdown.
The Print Configuration Manager may have some problems with /etc/printcap files from DEC OSF/1 Version 3.2 or earlier, as follows:
Using /etc/printcap files in the current version of Digital UNIX, the system assigns printers the names lp[0-9]*, [0-9]*, and, for the default printer, lp. For example, the default printer may have a name field such as lp0|0|lp|default|declaser3500:.... Another printer may be named lp7|7|some_alias|another alias:.... Therefore, the system has difficulty with printers that have fewer than two names or that use these reserved names as aliases.
Some of the attribute value checking differs between earlier versions and the current version. For example, some fields that were not previously required now are, and some attribute values that were legal no longer are.
The Print Configuration Manager requires that all comments be associated with a printer. As a result, comments appearing after the last printer are truncated.
To avoid these problems, invoke the printconfig utility with the menu interface (printconfig -ui menu). This brings up the lprsetup utility which is fully compatible with earlier printcap files.
The following notes apply to system administration.
The following notes apply to the use of enhanced security features.
This note covers problems that may occur when distributing enhanced security profiles via NIS.
YPPUSH=$(YPDIR)/yppush -p 6
This example, set in the NIS Makefile, allows up to six simultaneous transfers to NIS slave servers.
This can be addressed by ensuring that all the NIS slaves have these maps, using a procedure like the following (to be executed on each slave server that does not yet have these maps):
#
/var/yp/ypxfr -d <domainname> -h NISMASTER -c prpasswd
#
/var/yp/ypxfr -d <domainname> -h NISMASTER -c prpasswd_nonsecure
In the above commands, substitute the name of the local NIS master server for the NISMASTER token. This transfers initial copies of those maps to the slave servers.
#
make passwd prpasswd
exceeds 25 seconds, then only one user will succeed in logging in at a time. Internal testing has demonstrated that 4000 profiles and infrequent logins (3 at a time) can work, but even fewer profiles can be accommodated if bursts of nearly-simultaneous logins are frequent.
Because the user profiles and ttys information are now stored in database files, the previous recovery method of editing the files while in single-user mode is no longer available. However, as long as the /usr (and, if separate, /var) filesystems are mounted, the edauth(8) utility can be used in single-user mode to edit extended profiles and ttys database entries.
If the /etc/passwd file is somehow lost, but the extended profiles are still available, then a command sequence like the following can be used to recover some of the missing data:
# bcheckrc
# /tcb/bin/convuser -dn | /usr/bin/xargs /tcb/bin/edauth -g | \
  sed '/:u_id#/!d;s/.*:u_name=//;s/:u_id#/:*:/;s/:u_.*$/:/' \
  >psw.missing
This will create a psw.missing file containing entries like the following:
root:*:0:
Primary group information, finger information, home directory, and login shell are not recorded in the extended profile. The data for those fields must be recovered by other means.
Enhanced security will not allow usernames longer than the documented maximum of 8 characters.
For this release, bootable tape does not support the LSM product or the AdvFS addvol utility. Also, not all platforms and tape drives support bootable tape. Supported processor platforms are:
Supported tape devices are:
You should be aware of the following disk space issues before you use the btcreate command:
If you want to keep all the subsets along with all the kernel options, do the following to make extra space. Note that the examples in the following procedure are for UFS. For AdvFS, use the mkfdmn and mkfset commands to create new domains and filesets and mount them.
#
newfs /dev/rz1d
#
cd /usr/sys
#
mkdir FLAWLESS.BOOTABLE
#
mount /dev/rz1d /usr/sys/FLAWLESS.BOOTABLE
#
newfs /dev/rz1b
#
mount /dev/rz1b /mnt
#
cp * /mnt
#
umount /mnt
#
mount /dev/rz1b /usr/sys/bin
After completing these steps, start btcreate. If you are using AdvFS, the /usr/sys/bin file system must be dumped during btcreate in order to copy the entire contents of the /usr file system.
After restoring your system from bootable tape, you must set up the swap space while in single-user mode under the bootable tape, as follows:
#
mount -u /
#
cd /etc
#
echo "/dev/rz3b swap1 ufs sw 0 2" >> fstab
#
disklabel -s rz3b swap
After you complete these steps, shut down and reboot the system from the restored disk.
If the restored disk already contains a file system that has not been touched during the btextract process, do the following to see and use that partition:
#
disklabel -s rz1d 4.2BSD
#
mount /dev/rz1d /yourfs
To use a tape drive with any system, make sure that the kernel has been built with the tape drive attached to the system. Otherwise, you get dump errors and the system cannot boot from the tape.
The initial release of Multimedia Services for Digital UNIX Version 2.0 required several modifications to fully support Version 4.0 and Version 4.0 components. As a result, Multimedia Services for Digital UNIX Version 2.0A is shipped on the Digital UNIX Associated Products Volume 1 CD-ROM. The Multimedia Services Version 2.0A runtime should be installed instead of the Version 2.0 runtime. The Multimedia Services Version 2.0 development kit, not distributed with the Digital UNIX Version 4.0 distribution, can be used with the Multimedia Services Version 2.0A runtime kit. See the release notes for Multimedia Services for Digital UNIX Version 2.0A for more details. The release notes can be found on the CD-ROM as:
DOCUMENTATION/HTML/MME201_RELNOTES*
The settime utility is called twice at boot time if an update installation was performed.
To prevent this from happening on subsequent reboots, remove the link:
#
rm /sbin/rc3.d/S05settime
When switching between two different boot disks that are running different versions of Digital UNIX, the system clock may lose time.
See Section 1.4.5 for a full description of the problem.
In this release, osf_boot supports the booting of symbolically linked kernels. For example, assume you have /tmp/vmunix symbolically linked to ../mdec/vmunix as follows:
lrwxrwxrwx 1 anyuser system 14 Dec 6 22:41 /tmp/vmunix -> ../mdec/vmunix
In this case, osf_boot will detect the link and boot /mdec/vmunix as follows:
Digital UNIX boot - Wed Dec 6 17:02:04 EST 1995
symbolically linked kernel detected:
Loading /mdec/vmunix ...
Loading at fffffc0000230000
Current PAL Revision <0x4000000010530>
Switching to OSF PALcode Succeeded
New PAL Revision <0x4000000020123>
Do not add swap devices to a heavily loaded symmetric multiprocessing (SMP) machine by using the swapon /dev/rzxx command. Instead, add the device information to the /etc/fstab file and reboot the system.
To enhance the ability to detect certain SCSI disk device errors, additional event logging now occurs. However, these events can also occur during normal operation of the system. The events are known as SCSI unit attention events. To distinguish between normal events and abnormal events, the context within the event as well as the surrounding events must be considered. These entries begin as follows:
----- CAM STRING -----
ERROR TYPE Soft Error Detected (recovered)
The logging of these events will be associated with the first access after the event has occurred. First examine the ASC and ASQ values of the packet:
----- ENT_SENSE_DATA -----
ERROR CODE        x0070
CODE              x70
SEGMENT           x00
SENSE KEY         x0006 UNIT ATTN
INFO BYTE 3       x00
INFO BYTE 2       x00
INFO BYTE 1       x00
INFO BYTE 0       x00
ADDITION LEN      x0A
CMD SPECIFIC 3    x00
CMD SPECIFIC 2    x00
CMD SPECIFIC 1    x00
CMD SPECIFIC 0    x00
ASC               x29 <<<<<<<<<<<<<<<<<<
ASQ               x02 <<<<<<<<<<<<<<<<<<
FRU               x00
SENSE SPECIFIC    x000000
If the ASC is not x29, this is a device event or error (the SCSI specification contains the details of the meaning of the error). For entries with the ASC of x29, there are several possibilities for newer SCSI devices (older devices do not report these types of events):
Events of ASQ x02 or x03 should also have other related events in the error log (such as a bus reset or a device reset) for the specified bus or device. These types of events are informational only and do not indicate any type of failure on the system.
Events of ASQ x01 are normal events when they occur as part of switching on the system. The first access after the system is switched on creates one of these events. If these events occur at times other than the first access after the system is switched on, then it may indicate a problem within the device that is causing it to restart its firmware.
Also note that not all SCSI disks are capable of reporting this information.
If a printer is connected to multiple queues through a LAT or a local tty port and different jobs are submitted to different queues within a short period, some of the jobs may be lost. If this happens, resubmit the print request.
The following notes apply to network and communications software.
When using bind() on a UNIX domain (AF_UNIX) socket, the default mode of the socket has changed in Digital UNIX Version 4.0. Previously, the mode of a newly created socket was always 0777, regardless of the value of the creating process's umask. In this release, the mode of a newly created socket is as follows:
(0777 &~ umask)
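The following sketch illustrates the new behavior; the socket path and function name are hypothetical. With a umask of 027, the socket file created by bind() receives mode 0750:

#include <sys/types.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include <sys/un.h>
#include <string.h>

int make_local_socket(void)
{
    struct sockaddr_un addr;
    int s;

    s = socket(AF_UNIX, SOCK_STREAM, 0);

    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strcpy(addr.sun_path, "/tmp/demo_socket");

    umask(027);     /* socket file is created with mode 0777 & ~027 = 0750 */
    bind(s, (struct sockaddr *)&addr, sizeof(addr));
    return s;
}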
The previous behavior (0777, regardless of umask) may be restored by setting the kernel configuration flag insecure_bind to a value of 1. This can be done by either or both of the following two methods:
generic:
    insecure_bind = 1
Then you must reboot the system.
#
sysconfig -r generic insecure_bind=1
The following changes and restrictions apply to ATM.
The command syntax for the atmarp command has changed due to Multiple LIS support. The atmconfig command has many new options, mainly due to CBR (Constant Bit Rate) support. Lastly, a new command, atmsig, is now required in all ATM startup scripts. Therefore, all ATM startup scripts, including /etc/atm.conf, will need to be modified. See the reference pages for atmconfig, atmarp, and atmsig for further details.
When the ATM end system is connected to a Digital Gigaswitch that is running Version 1.3 (or earlier) of the Gigaswitch software, the following line specifying the useesi keyword is required:
atmconfig up driver=lta0 useesi=1-4 wait
The lockd daemon must be restarted after ATM is brought up. This is because the lockd daemon takes a one-time look at the IP interface list, and ATM interfaces such as LIS0 are added dynamically.
If UNI signalling is disabled with the atmsig down command, it will not be correctly restarted by an atmsig up command. Following a command such as:
atmsig down driver=ltaX
you must do the following to successfully restart signalling:
atmconfig down driver=ltaX
atmconfig up driver=ltaX
atmsig up driver=ltaX
Your system may crash if the user stack limits are increased beyond 2 GB in the /etc/sysconfigtab file and an application accesses more than 2 GB of stack. The crash happens some time after the location beyond 2 GB is accessed, typically when the system is paging. The default stack limits are below 2 GB, so this is a problem only if the limits have been increased.
The solution is to reduce the stack limits in the /etc/sysconfigtab file and reboot. The application will then receive a fatal error when accessing stack above the specified limit.
The IFNET paradigm allows the bridging of streams device drivers to sockets. This release supports SVR4 streams, but the IFNET paradigm is not fully supported. IFNET is supported only over the ln Ethernet interface, and the number of ln devices supported is limited to two.
This release does not support Orderly Release in XPG4 XTI (default XTI interface). It is still available for users of XPG3 XTI. See the Networking Programmer's Guide for information on using XPG3 XTI.
When you restart the network using the following command:
#
/usr/sbin/rcinet restart
the ifconfig command is run by the /usr/sbin/rcinet script, which clears and resets the primary network interface address.
Network interfaces with configured interface aliases use the alias address as a source address for outgoing packets. Resetting the primary network interface address can cause a problem for systems with a firewall or proxy-access configuration based on the primary address. Generally, alias addresses are not in the access control lists in such systems.
To avoid this problem, you can use one of the following solutions:
#
ifconfig <if_w_aliases> down delete
When restarting the network using netsetup, an error message similar to the following will be displayed:
kill: 204: no such process
This problem also exists when running the following commands:
#
rcinet stop
#
rcinet restart
The message is incorrect and has no effect on your system.
The Common Desktop Environment (CDE) provides facilities and features for applications to communicate in a networked environment. After the network is configured and enabled, these features become available each time a new desktop session is started. Once a desktop session has started, the current session has a static dependency on the state of the network configuration. Network and system administrators should be very cautious about making dynamic changes to the network configuration while in a network-aware desktop session.
Prior to making any dynamic network changes, such as changing the state of your network adapter to off or changing your primary network address, add the following entry to the /.dtprofile file:
export DTNONETWORK=true
The system administrator must then log out and back in as root for the change to take effect. This change removes the dependency on the state of the network. Failure to do this may result in a session hanging after clicking on a CDE icon, such as the screen lock or Exit icons.
After all network changes are completed, remove the export DTNONETWORK=true entry from the /.dtprofile file.
The following notes apply to Local Area Transport (LAT).
The latsetup utility sometimes creates devices with duplicate minor numbers. If you manually create LAT BSD devices that do not match the valid BSD tty name space convention, latsetup can create devices with duplicate minor numbers. For example, creating device tty0 with a minor number 2 instead of 1 can cause this problem.
When a CTRL/A character is typed during a LAT tty session, all lowercase characters are converted to uppercase. Another CTRL/A will change the mode back to normal.
When making a number of simultaneous llogin connections, it is recommended that you use llogin with the -p option. To speed up an llogin connection, it is also recommended that you add the target host name as a reserved service.
It is no longer necessary to build LAT into the kernel. LAT is no longer a mandatory kernel option when the LAT subset is selected and will not appear in the kernel configuration file. Because LAT requires the Data Link Bridge (DLB), it is still necessary to build DLB into the kernel when using LAT.
The default behavior upon booting to multi-user mode is for LAT to be dynamically loaded into the running kernel. If LAT is not started at boot time via the /sbin/rc3.d/S58lat script, the recommended method for starting and stopping LAT is to verify that LATSETUP is enabled in /etc/rc.config and to execute the /usr/sbin/init.d/lat program with the start or stop option.
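For example, with LATSETUP enabled in /etc/rc.config, LAT can be started and stopped manually as follows:

# /usr/sbin/init.d/lat start
# /usr/sbin/init.d/lat stop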
In the slave-only Version 2.0 LAT implementation, service group codes are used in solicitation messages for host-initiated connections.
Many users rely on the ability to control access to server ports by changing the group codes of the locally offered services. Although it is contrary to the recommendations of the LAT protocol, this behavior is once again supported. Outgoing port group codes, normally used for this purpose, continue to be used in all other cases where they are required by the protocol.
If the shutdown -r command is executed when there are LAT login sessions with active background processes, the shutdown program appears to stall. The workaround for this problem is to halt LAT (using the latcp -h command) either before executing the shutdown command or after it has stalled.
The notes in this section apply to file systems.
For an NFS client to make direct use of ACLs or extended attributes (property lists) over NFS, the proplistd daemon must be enabled on the NFS server, and the proplist mount option must be used when mounting on the client. Access checks are enforced by the server in any case, although NFSv2 client caching can sometimes cause inappropriate read access to be granted. Correctly implemented NFSv3 clients make the necessary access checks.
Start the proplistd daemon by selecting the number of proplist daemons to run when you use the nfssetup utility. You can also start the daemon manually with the proplistd command. For example:
#
/usr/sbin/proplistd 4
On the client, the filesystem must be mounted with the proplist option by either of the following methods:
sware1:/advfs /nfs_advfs nfs rw,proplist 0 0
#
mount -o proplist sware1:/advfs /nfs_advfs
See the acl(4), fstab(4), proplist(4), mount(8), nfssetup(8), and proplistd(8) reference pages for more information. Note that the proplist option is not documented in mount(8).
On AdvFS filesystems there is a hard limit of 1560 bytes for a property list entry. Since Access Control Lists (ACLs) are stored in property list entries, this equates to 62 ACL entries in addition to the 3 required ACL entries. The error EINVAL will be returned if you attempt to exceed this limit.
To facilitate interoperation of the UFS and AdvFS ACLs, a configurable limit has been imposed on UFS ACLs. The default value of the UFS limit is 1548 bytes, equivalent to the 65 entry limit on AdvFS. The UFS configurable limit on ACLs has been added to the sec subsystem and has been given the attribute name ufs-sec-proplist-max-entry. The attribute can be dynamically configured using the sysconfig utility or by setting the attribute in the file sysconfigtab.
A configurable property list element size for UFS has also been added to the sec subsystem and has been given the attribute name ufs-proplist-max-entry. The value of ufs-proplist-max-entry must be larger than ufs-sec-proplist-max-entry by enough space to hold a property list element header. Adjustment of ufs-proplist-max-entry to achieve this is done automatically by the sysconfig utility. The default value of ufs-proplist-max-entry is 8192 bytes.
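For example, the UFS ACL limit could be raised on a running system as follows (the value 4096 is arbitrary and shown only for illustration):

# sysconfig -r sec ufs-sec-proplist-max-entry=4096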
See the cfgmgr(8), seconfig(8), seconfigdb(8), and sysconfigtab(4) reference pages for more information.
The following notes discuss features, problems, and restrictions of the POLYCENTER Advanced File System (AdvFS).
Note
This is an important note for users of vdump and vrestore
Backups made using vdump on Digital UNIX Version 4.0 cannot be restored using vrestore on earlier versions of Digital UNIX. Patches will be made available for earlier versions of vrestore to correct this problem.
Backups made using vdump on earlier versions of Digital UNIX can be restored using vrestore under Digital UNIX Version 4.0 without problems.
Under some circumstances, AdvFS can panic with the following message:
log half full
This can occur when a large percentage of a very large file is truncated and the fileset containing the file has a clone fileset. Truncation occurs when an existing file is overlaid by another file, and explicitly via the truncate system call.
The same panic can occur if very large, very fragmented files are migrated. Migration occurs when the balance, rmvol, and migrate AdvFS utilities are run. Files with more than 40,000 extents are at risk unless the transaction log size is increased.
The command
#
showfile -x <filename> | grep extentCnt
will indicate how many extents a file is using. Backup and restore will help to defragment files as will copying (not moving with mv) a file to another name. However, if the fileset has a clone and if a large file is truncated as a result of the copy, the truncation panic could occur.
On AdvFS filesystems there is a hard limit of 1560 bytes for a property list entry. Since Access Control Lists (ACLs) are stored in property list entries, this equates to 62 ACL entries in addition to the 3 required ACL entries. The error EINVAL will be returned if you attempt to exceed this limit.
To facilitate interoperation of the UFS and AdvFS ACLs, a configurable limit has been imposed on UFS ACLs. The default value of the UFS limit is 1548 bytes, equivalent to the 65 entry limit on AdvFS. The UFS configurable limit on ACLs has been added to the sec subsystem and has been given the attribute name ufs-sec-proplist-max-entry. The attribute can be dynamically configured using the sysconfig utility or by setting the attribute in the file sysconfigtab.
A configurable property list element size for UFS has also been added to the sec subsystem and has been given the attribute name ufs-proplist-max-entry. The value of ufs-proplist-max-entry must be larger than ufs-sec-proplist-max-entry by enough space to hold a property list element header. Adjustment of ufs-proplist-max-entry to achieve this is done automatically by the sysconfig utility. The default value of ufs-proplist-max-entry is 8192 bytes.
See the cfgmgr(8), seconfig(8), seconfigdb(8), and sysconfigtab(4) reference pages for more information.
The vdump and vrestore commands do not have the same functionality as the dump and restore commands.
When a file is renamed between different level dumps, the file is not backed up on the later dump; the vdump command assumes it was backed up on the earlier dump.
When a file is deleted between different level dumps, the file is restored during the restore process.
When a directory entry changes type (for example, a file becomes a directory) between different level dumps, you get an error message during the restore of the higher level dump saying that the file cannot be restored. The workaround is to remove the file that changed type between the different levels of restore.
Running the verify command with the -F flag causes some recovery to be done on the domain before the attempt to mount it.
Avoid using the rmfset utility on busy domains. If you attempt to remove a fileset using the rmfset command and the target domain is experiencing a lot of I/O, the rmfset operation may hang.
The vdump and vrestore utilities correctly save and restore AdvFS sparse files. In previous versions, the holes in the sparse files were allocated disk space and filled with zeros. Note that sparse files that are striped are still handled as in previous versions.
AdvFS will now verify at mount time that all of the data in all of the volumes in a domain can be accessed. It does this by attempting to read the last block in each volume as specified by the disk label that was in use at the time that the volume was added to the domain. If it cannot read that block, it attempts to read the last block that AdvFS has marked as being currently used to hold data. If AdvFS cannot read the last in-use block for any volume in the domain, the mount will fail. If it can read the last in-use block but cannot read the last block as specified by the disk label, the mount will succeed but in read-only mode.
One reason that the last block may not be able to be read is that a disk may be mislabeled on a RAID array. The user should check the labels of the flagged volumes in the error message. If the disk label is incorrect, the user can repair the domain in one of the following ways.
Before attempting corrective action, you should back up all filesets in the domain. The corrective action depends on the state of the domain. If the domain consists of multiple volumes and has enough free space to hold two entire volumes, it is possible to remove the offending volumes one at a time, fix the disk label, and add them back to the domain. Perform the following operation on each of the failed volumes:
Step 4 is important. If the domain is not balanced after adding the corrected volume, the user runs the risk of filling up one of the incorrectly labeled volumes and inducing an I/O error.
If the domain's free space is less than two volumes, you should back up all the filesets in that domain, remove the domain, fix the disk labels of the volumes, and rebuild the domain. Then restore the filesets from the backups.
Another reason a mount might fail in this way is that an LSM volume on which an AdvFS domain resides has been shrunk from its original size.
When log or metadata write errors occur (for example, due to a disk failure or media error), AdvFS initiates a domain panic rather than a system panic on any non-root file domain. A domain panic prevents further access to the domain but allows the filesets in the domain to be unmounted.
When a domain panic occurs, a message is displayed in the following format:
AdvFS Domain Panic; Domain <name> Id <domain_Id>
For example:
AdvFS Domain Panic; Domain cybase_domain Id 2dad7c28.0000dfbb
After a domain panic, use the mount command to list all mounted filesets, then use umount to unmount all filesets in the domain specified in the error message. You can then take the necessary steps to correct the hardware problem. After you have corrected the hardware problem, it is recommended that you run the verify command (the domain structure checker) on the domain before remounting it. This determines whether the write error compromised the domain.
The UFS and AdvFS user and group quota commands have been consolidated. The standard UFS quota commands can now be used to manage user and group quotas on AdvFS. The following list identifies the old and new AdvFS quota commands:
Old AdvFS Command | New Consolidated Command
vquotaon          | quotaon
vquotaoff         | quotaoff
vquotacheck       | quotacheck
vquota            | quota
vquot             | quot
vrepquota         | repquota
vedquota          | edquota
vncheck           | ncheck
AdvFS quota functions have not changed. Functional differences between UFS and AdvFS quotas exist and are described in the reference pages for the consolidated commands.
The /sbin/init.d/quota script now checks and enables quotas for both AdvFS and UFS. This script runs during system initialization to stop or start user and group quota enforcement.
Support for the existing AdvFS versions of the quota commands will continue for some time. Future versions of AdvFS will drop the unique quota commands. Until then, both versions of the quota commands will work.
While AdvFS supports an unlimited number of filesets per system, the number of filesets that can be mounted at one time is limited to 512 minus the number of active file domains. For example, if a system has three active domains, up to 509 filesets can be mounted at the same time.
If a disk has a partition erroneously labeled AdvFS that overlaps a UFS partition, a file system check and repair operation (fsck and ufs_fsck) will fail on a partition that overlaps the AdvFS partition. The solution is to relabel the AdvFS partition on the disk.
Conversely, if a disk partition that overlaps AdvFS is erroneously labeled UFS, an AdvFS file system check and repair operation, verify, will fail on a partition that overlaps the UFS partition. To correct the problem, relabel the UFS partition.
AdvFS has the following known problems and restrictions:
You can reuse a partition that was previously part of an AdvFS domain. However, before you reuse the partition, you must remove the domain on the partition you want to reuse. Remove the entire domain by using the rmfdmn command. After the unused domain is removed, you can create a new domain on the partition.
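For example, assuming a stale domain named old_dmn on the partition /dev/rz3c (both names are hypothetical), the sequence would look like this:

# rmfdmn old_dmn
# mkfdmn /dev/rz3c new_dmn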
Support for extended attributes (vfs+) in AdvFS is limited to data elements of 2KB or less. Application programs attempting to set larger attributes will receive an error return value.
On systems with domains that contain very large numbers of files (over 5000), the standard AdvFS metadata extent page allocation may be inadequate. As a result, the following incorrect error message is displayed:
out of disk space
To avoid this problem, AdvFS provides two ways to configure your file domain to handle large numbers of files. You can use the mkfdmn command with the -x or -p flags to create a file domain. Then, if the file domain is extended beyond one volume, use the addvol command with the same flags.
See the mkfdmn reference page for complete details on using these flags. A table is included in the reference page to indicate the number of extents required for the number of files in the file domain.
If you attempt to mount a UFS file system while in single-user mode on a system that is configured with AdvFS as root, the following error will occur:
Error checking for overlapping partitions: Invalid MSFS fileset name (root_device) in mounttab.
To manually mount a UFS file system while in single-user mode on a system with an AdvFS root, you must perform a mount update on the root file system. Use the following command:
#
mount -u /
You can then mount any UNIX file systems.
Under certain conditions, the disk usage information on an AdvFS file system may become corrupted. To correct this, turn on quotas in the /etc/fstab file for the affected file system, and then run the vquotacheck command on the file system. This should correct the disk usage information.
Any attempt to enable shelving for an AdvFS fileset using the hierarchical storage manager (HSM) mkefs command results in the following error message:
Can't get current fileset shelving info - ENOT_SUPPORTED (-1041)
Also, any attempt to mount an existing AdvFS fileset that already has shelving enabled results in the following error message:
AdvFS mount - shelving not supported
To access an existing AdvFS fileset that already has shelving enabled, restore the data into another fileset that does not have shelving enabled.
Both the fstat(2) reference page and the /usr/include/sys/stat.h file inaccurately state that the combination of the st_dev and st_ino fields of the stat structure creates a unique file identifier through time. Because the st_ino value can be reused, the only way to create a unique file identifier is to combine the st_ino, st_dev, and st_gen fields. While this is true for UFS, it is even more important for AdvFS, which recycles st_ino values rapidly.
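A minimal sketch of comparing two stat structures using all three fields (the function name is arbitrary):

#include <sys/stat.h>

int same_file(const struct stat *a, const struct stat *b)
{
    /* st_dev and st_ino alone are not unique over time;
     * st_gen must also match. */
    return a->st_dev == b->st_dev &&
           a->st_ino == b->st_ino &&
           a->st_gen == b->st_gen;
}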
If an NFS server is exporting an AdvFS directory and the client to which it is exporting crashes, upon reboot of the client, the following error message may appear on the console of the server:
lockd : can't clear lock after crash of client client_name : invalid argument.
This message does not disrupt NFS operations.
When a large percentage of a very large file is truncated and the fileset containing the file has a clone fileset, the system will panic with a log half full error message. Truncation happens when an existing file is overlaid by another file or when the truncate system call is made.
The panic also occurs if very large, very fragmented files are migrated. Migration occurs when you run the AdvFS balance, rmvol, and migrate utilities. Files with greater than 40,000 extents are at risk. To determine how many extents a file is using, enter the following command:
#
showfile -x filename | grep extentCnt
The backup, restore, and copy commands tend to defragment files; the mv command does not. However, if the fileset has a clone and a large file is truncated as a result of the copy, the log half full panic could again result.
Logical Volume Manager (LVM) support is being retired in this release of Digital UNIX and there will be no further support of LVM.
All volume management functions are provided by the Logical Storage Manager (LSM). LVM functions are disabled with the exception of the support necessary to encapsulate LVM volumes under LSM. You must encapsulate any LVM volumes under LSM to maintain access to any data in such volumes. In a future release of Digital UNIX, encapsulation support will be dropped and any data still under LVM control will be lost.
LVM volume groups can be encapsulated to the rootdg diskgroup. Attempting to encapsulate an LVM volume group to any other LSM disk group fails.
You can encapsulate an LVM volume group to a non-rootdg diskgroup by performing the following procedure:
#
/usr/sbin/vollvmencap -g lvmdg1 /dev/vg1
The vollvmencap command creates the LSM scripts in the /etc/vol/reconfig.d/lvm.d directory.
#
cd /etc/vol/reconfig.d/lvm.d
#
mv dg vg1
Then change the current working directory to root.
When you execute the /sbin/vol-lvm-reconfig command, an error message is displayed; ignore this message.
For more information, see the Logical Storage Manager manual.
The following notes describe problems and restrictions of the Logical Storage Manager (LSM).
Physical block 0 on Digital disks is typically write-protected by default. If a disk is added to LSM by using the voldiskadd utility, physical block 0 is skipped. However, if a partition that includes physical block 0 is encapsulated into LSM by using the volencap, vollvmencap, or voladvdomencap utility, physical block 0 is not skipped. This is not a problem because the file system already skips block 0 and does not write to it.
A problem can occur when an LSM volume that contains a write-protected block 0 is dissolved and its disk space is reused for a new purpose. Neither the new application nor LSM knows about the write-protected physical disk block 0, and a write failure can occur.
To fix this problem, use the following steps to remove the write-protected physical disk block 0 from the LSM disk before it can be assigned to the new volume:
When an LSM mirror is created using a disk that is configured as Just-a-Bunch-of-Disks (JBOD) off either the SWXCR-P or SWXCR-E RAID controller, a disk failure requires that you reconfigure the disk on the controller. The disk is in an unusable state once it is set off line by the controller and cannot be used by LSM until it is reconfigured. Refer to the StorageWorks RAID Array 200 Subsystem Family Installation and Configuration Guide.
If you install LSM by using the setld utility after you originally install Digital UNIX, you must rebuild the system kernel to enable LSM.
To rebuild the kernel, run the doconfig utility with no command flags. Note that the doconfig menu display does not include LSM. However, the doconfig utility will build a kernel that includes LSM. Refer to the LSM Installation documentation for more information.
Only LUN 0 is supported as a boot device by the console. Hence, the LSM rootvol and swapvol volumes can be mirrored only to LUN 0 in an HSZ. Therefore, when you use the volrootmir script to mirror rootvol and swapvol, use only a LUN 0 on an HSZ as an argument to the volrootmir script.
If you use the LSM rootvol volume for the root file system and the swapvol volume is in use as a primary swap volume, LSM adds the following entries to the /etc/sysconfigtab file to enable rootability:
lsm:
    lsm_rootvol_is_dev=1
    lsm_swapvol_is_dev=1
If these entries are deleted or if the /etc/sysconfigtab file is deleted, the system will not boot. If this happens, you can boot the system interactively as follows:
>>>
boot -fl i
.........
.........
Enter kernel_name option_1 ... option_n: vmunix
After the system boots, edit the /etc/sysconfigtab file and add the LSM entries as shown above. Reboot the system for the changes to take effect.
LSM volumes enabled with Block Change Logging (BCL) require two or more log subdisks that are at least one sector long. If you intend to use BCL on any volumes that do not already have logging subdisks, Digital recommends that you allocate at least two sectors to each log subdisk.
Any volumes that currently use single-sector logging subdisks will continue to work correctly. However, Digital recommends that you reconfigure them as soon as convenient to avoid being forced to do so at a later date.
Implementing these recommendations now will ease the transition to new requirements in future releases.