This chapter provides information that you must be aware of when working with DIGITAL UNIX 4.0A and TCR 1.4A Patch Kit-0008.
The following storage space is required to successfully install this patch kit:
Temporary Storage Space
A total of ~250 MB of storage space is required to untar this patch kit. It is recommended that this kit not be placed in the /, /usr, or /var file systems because this may unduly constrain the available storage space for the patching activity.
Permanent Storage Space
Up to ~46 MB of storage space in /var/adm/patch/backup may be required for archived original files if you choose to install and revert all patches. See the Patch Kit Installation Instructions for more information.
Up to ~47 MB of storage space in /var/adm/patch may be required for original files if you choose to install and revert all patches. See the Patch Kit Installation Instructions for more information.
Up to ~94 KB of storage space is required in /var/adm/patch/doc for patch abstract and README documentation.
A total of ~105 KB of storage space is needed in /usr/sbin/dupatch for the patch management utility.
Temporary Storage Space
A total of ~250 MB of storage space is required to untar this patch kit. It is recommended that this kit not be placed in the /, /usr, or /var file systems because this may unduly constrain the available storage space for the patching activity.
Permanent Storage Space
Up to ~51 MB of storage space in /var/adm/patch/backup may be required for archived original files if you choose to install and revert all patches. See the Patch Kit Installation Instructions for more information.
Up to ~52 MB of storage space in /var/adm/patch may be required for original files if you choose to install and revert all patches. See the Patch Kit Installation Instructions for more information.
Up to ~1019 KB of storage space is required in /var/adm/patch/doc for patch abstract and README documentation.
A total of ~120 KB of storage space is needed in /usr/sbin/dupatch for the patch management utility.
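As a rough aid, the free space in a candidate directory can be compared against the ~250 MB temporary requirement before untarring the kit. This is only a sketch: the kit directory name, the df -k field position, and the check_space helper are our assumptions, not part of the kit.

```shell
# Sketch: compare free space with the ~250 MB temporary requirement
# before untarring the kit.  kitdir is a hypothetical location; the
# "Available" column position in df output can vary by system.
required_kb=256000      # ~250 MB expressed in KB

check_space() {
    # $1 = required KB, $2 = available KB; succeeds if there is room
    [ -n "$2" ] && [ "$2" -ge "$1" ]
}

kitdir=/usr/local/patch_kit                              # assumption
avail_kb=$(df -k "$kitdir" 2>/dev/null | awk 'NR==2 {print $4}')
if check_space "$required_kb" "$avail_kb"; then
    echo "enough space in $kitdir"
else
    echo "insufficient or unknown space in $kitdir"
fi
```

The guard on an empty second argument makes the check fail safely when df cannot report on the directory at all.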
The following sections describe new features of dupatch.
Patches for ASE and TCR are now installed, removed, and managed through dupatch. The ASE and TCR patch kits have been converted to dupatch-based patch kits and are distributed in the same patch distribution as the applicable operating system.
The multi-product support within dupatch is most visible when installing or removing patches. dupatch will display a list of the products that are on the system and in the patch kit, allowing you to select one or more products before proceeding with patch selections.
You must load the new patch tools provided in this patch kit. See the Patch Kit Installation Instructions for more information.
Since all prior ASE and TCR patches have been installed manually, you must set the system patch baseline. See the Patch Kit Installation Instructions for detailed information.
The dupatch utility now manages patch dependencies across the DIGITAL UNIX operating system, ASE, and TCR patch kits. An example of cross-product patch dependency handling for a system with both DIGITAL UNIX 4.0A and TCR 1.4A installed follows:
If DIGITAL UNIX 4.0A Patch 1.00 is chosen for installation and it depends upon TruCluster 1.4A Patch 17.00, which is not already installed or chosen for installation, the dupatch installation precheck will warn you of the dependency and block the installation of DIGITAL UNIX 4.0A Patch 1.00.
If the patch selections are reversed, dupatch will still warn you and block installation of the chosen patch.
The format and content of the per-patch special instructions have been revised to make them easier to use. The special instructions are now displayed when patches are removed. The per-patch special instructions are viewable through the dupatch documentation menu.
The patch tracking and documentation viewing features within dupatch can now be used in multi-user mode by non-root users. See the Patch Kit Installation Instructions for more information.
From the dupatch patch tracking menu you can now list the patch kits from which the patches installed on your system originated.
The system patch baselining feature of dupatch has been improved. Phase 4 now reports all missing or unknown system files, regardless of their applicability to the patch kit. This will help you identify the origin of manually changed system files. See the Patch Kit Installation Instructions for more information.
The dupatch command line mode contains the following new switches:
The -product switch must be used with the -install or -delete switch when the target system has more than one installed product that is on the kit (such as DIGITAL UNIX, ASE, and TCR). This switch allows you to specify the product name that the rest of the patch operations will affect. The -product switch must precede the -patch switch on the command line. See the Patch Kit Installation Instructions for more information.
A -nolog switch has been added to enable you to turn off session logging.
The -version switch is no longer used for delete. Using this switch will cause an error and the help information will be displayed on the screen.
Any error on the command line will cause the help information to be displayed on the screen.
If any mandatory switch is missing when you use the command line interface, the command fails with the appropriate usage message. Once you select the command line interface, dupatch will not go into interactive mode. Prompting is no longer mixed with the command line interface.
The new dupatch will work with older revisions of dupatch-based patch kits. However, older revisions of dupatch (rev 15 and lower) do not know how to install, remove, or manage patches from the new-style patch kits. Please ensure that you load the new patch installation tools when you receive this patch kit. See the Patch Kit Installation Instructions for more information.
An fgrep message may appear while installing all the patches as nonreversible, or while update installing a patched system to a later release (for example, V4.0D):
fgrep: input too long
This patch modifies /etc/ddr.dbase and /etc/ddr.db. A copy of the original files should be made before installing this patch.
The following represents an update to the cc(1) manpage:
A new switch, -input_to_ld, has been added to the cc compiler.
This new switch allows the passing of the "-input filename" switch to ld
via cc, without changing the file's relative position in the ld command line.
Note that using the -Wl switch to do this (-Wl,-input,filename) affects the
order in which files are presented to the linker and can result in an invalid
executable being created. This is due to the cc compiler's convention of
placing all arguments passed via -Wl on the command line first, followed by
any switches or object files entered by the user on the cc command line that
are meant for ld. This convention results in the .o files specified with
-Wl,-input,filename being included before all other .o files on the
command line, and before /usr/lib/cmplrs/cc/crt0.o, which is the transfer
point for all executables. The linker lays out the code in the order in
which it sees the input .o files, so their order on the ld command line is
important.
The cc driver interprets the -input_to_ld switch as a -input switch destined
for ld, and places it on the ld command line in the same relative position
that it had on the cc command line. This not only ensures that crt0.o is
passed to the linker first, but also preserves the linking order that the
user specified on the original cc command line.
The following sections contain reference page updates.
Add the following to the DESCRIPTION section:
When the
cron
daemon is started with the
-d
option, a trace of all jobs executed by
cron
is output to file
/var/adm/cron/log
.
Add the following to the FILES section:
/var/adm/cron/cron.deny
List of denied users
/var/adm/cron/log
History information for cron
/var/adm/cron/queuedefs
Queue description file for at, batch, and cron
Add
queuedefs(4)
to the Files: section
of RELATED INFORMATION.
queuedefs(4)
NAME
queuedefs - Queue description file for at, batch, and cron commands
DESCRIPTION
The queuedefs file describes the characteristics of the queues managed by
cron or specifies other characteristics for cron. Each non-comment line in
this file describes either one queue or a cron characteristic. Each
uncommented line should be in one of the following formats.
q.[njobj][nicen][nwaitw]
max_jobs=mjobs
log=lcode
The fields in these lines are as follows:
q The name of the queue. Defined queues are as follows:
a The default queue for jobs started by at
b The default queue for jobs started by batch
c The default queue for jobs run from a crontab file
Queues d to z are also available for local use.
njob The maximum number of jobs that can be run simultaneously in the
queue; if more than njob jobs are ready to run, only the first njob
jobs will be run. The others will be initiated as currently running
jobs terminate.
nice The nice(1) value to give to all jobs in the queue that are not run
with a user ID of superuser.
nwait The number of seconds to wait before rescheduling a job that was
deferred because more than njob jobs were running in that queue, or
because the system-wide limit of jobs executing (max_jobs) has been
reached.
mjobs The maximum number of active jobs from all queues that may run at any
one time. The default is 25 jobs.
lcode Logging level of messages sent to a log file. The default is 4.
Defined levels are as follows:
level-code level
0 None
1 Low
2 Medium
3 High
4 Full
Lines beginning with # are comments, and are ignored.
EXAMPLES
The following file specifies that the b queue, for batch jobs, can have up
to 50 jobs running simultaneously and that those jobs will be run with a
nice value of 20. If a job cannot be run because too many other jobs are
running, cron will wait 60 seconds before trying again to run it. All other
queues can have up to 100 jobs running simultaneously; they will be run
with a nice value of 2, and if a job cannot be run because too many other
jobs are running, cron will wait 60 seconds before trying again to run it.
b.50j20n60w
The following file specifies that a total of 25 active jobs will be allowed
by cron over all the queues at any one time, and cron will log all messages
to the log file. The last two lines are comments that are ignored.
max_jobs=25
log=4
# This is a comment
# And so is this
FILES
/var/adm/cron
Main cron directory
/var/adm/cron/queuedefs
The default location for the queue description file.
RELATED INFORMATION
Commands: at(1), cron(8), crontab(1), nice(1)
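As an illustration of the q.[njobj][nicen][nwaitw] format described above, the following shell sketch splits a fully specified queue line into its fields. It assumes all three fields are present (each is optional in a real queuedefs file), and the parse_queue helper is ours, not part of the system.

```shell
# Sketch: split a fully specified queuedefs queue line of the form
# q.[njob]j[nice]n[nwait]w into its fields.  Assumes all three
# fields are present; real queuedefs lines may omit any of them.
parse_queue() {
    line=$1
    queue=${line%%.*}                  # queue name before the dot
    spec=${line#*.}                    # field spec after the dot
    njob=$(echo "$spec"  | sed -n 's/^\([0-9]*\)j.*/\1/p')
    nice=$(echo "$spec"  | sed -n 's/.*j\([0-9]*\)n.*/\1/p')
    nwait=$(echo "$spec" | sed -n 's/.*n\([0-9]*\)w$/\1/p')
    echo "queue=$queue njob=$njob nice=$nice nwait=$nwait"
}

parse_queue "b.50j20n60w"    # → queue=b njob=50 nice=20 nwait=60
```

Run against the example line from the EXAMPLES section, it recovers the 50-job limit, nice value 20, and 60-second wait.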
On days when the daylight saving time (DST) changes, cron schedules
commands differently from normal.
The two rules described below specify cron's scheduling policy
for days when the DST changes. First, some terms are defined.
An AMBIGUOUS time refers to a clock time that occurs twice
in the same day because of a DST change (usually on a day during Fall).
A NONEXISTENT time refers to a clock time that does not occur
because of a DST change (usually on a day during Spring).
DSTSHIFT refers to the offset that is applied to standard time to
result in daylight savings time. This is normally one hour, but can be
any amount of time up to 23 hours and 59 minutes.
The TRANSITION period starts at the first second after the DST shift
occurs, and ends just before DSTSHIFT time later.
An HOURLY command has a * in the hour field of the crontab entry.
RULE 1: (AMBIGUOUS times)
-------------------------
A non-hourly command is run only once at the first occurrence
of an ambiguous clock time.
o A non-hourly command scheduled for 01:15 and 01:17
will be run at 01:15 and 01:17 EDT on 10/25/98
and will not be run at 01:15 or 01:17 EST.
An hourly command is run at all occurrences of an ambiguous time.
o An hourly command scheduled for *:15 and *:17
will be run at 01:15 and 01:17 EDT on 10/25/98
and also at 01:15 and 01:17 EST.
RULE 2: (NONEXISTENT times)
---------------------------
A command is run DSTSHIFT time after a nonexistent clock time.
If the command is already scheduled to run at the newly shifted time,
then the command is run only once at that clock time.
o A non-hourly command scheduled for 02:15 and 03:15
will be run once at 03:15 EDT on 4/5/98.
o A non-hourly command scheduled for 02:15 and 02:17
will be run once at 03:15 and once at 03:17 EDT on 4/5/98.
o An hourly command scheduled for *:15 and *:17
will be run once at 03:15 and once at 03:17 EDT on 4/5/98.
Note:
Cron's behavior during the transition period is undefined if the
DST shift crosses a day boundary, for example when the DST shift
is 23:29:29->00:30:00 and the transition period is 00:30:00->01:29:59.
-------------------------------------------------------------------------
Here are sample DST change values (for Eastern US time EST/EDT).
During the transition period, clock time may be either
nonexistent (02:00-02:59 EST in Spring)
or ambiguous (01:00-01:59 EDT or EST in Fall).
Spring (April 5, 1998):
DST shift: 01:59:59 EST
-->
03:00:00 EDT
transition period: 03:00:00 EDT
-->
03:59:59 EDT
DSTSHIFT: 1 hour forwards
Fall (Oct 25, 1998):
DST shift: 01:59:59 EDT
-->
01:00:00 EST
transition period: 01:00:00 EST
-->
01:59:59 EST
DSTSHIFT: 1 hour backwards
-------------------------------------------------------------------------
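The Fall ambiguity in the table above can be reproduced with GNU date and the tz database (both assumptions; neither is part of DIGITAL UNIX): 01:30 occurred twice on 10/25/98, once in EDT and again, one hour of real time later, in EST.

```shell
# Sketch: the two wall-clock occurrences of 01:30 on 10/25/98.
# Requires GNU date and tzdata (assumptions).
t_edt=$(TZ=America/New_York date -d "1998-10-25 01:30:00 EDT" +%s)
t_est=$(TZ=America/New_York date -d "1998-10-25 01:30:00 EST" +%s)
echo $(( t_est - t_edt ))    # → 3600: same clock time, one hour apart
```

This is exactly the ambiguity Rule 1 resolves: a non-hourly job scheduled for 01:30 runs only at the first (EDT) occurrence, while an hourly job runs at both.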
The updated reference page sections for
lpr(1)
follow:
The printer log, lpr.log, now reports the creation of files preceded
by a dot (.) in the spooling directories. Do not amend or delete
these files, because the printer subsystem manages their creation and
cleanup.
For initial use, DIGITAL recommends that you set the logging level
to lpr.info. If you have a problem that is escalated to technical
support, the support organization will request lpr.log at the
lpr.debug level. This is because the DEBUG messages provide a
detailed trace that can only be interpreted by reference to the
source code, and lpr.log will simply grow more quickly if DEBUG
messages are logged. The lpr.info level provides a shorter report
of an event, including any network retry messages and unusual
occurrences (which are not always errors).
All changes to the status file of a queue, including reports of
any files printed, are reported at the DEBUG level rather than the
INFO level. This reduces the rate of growth of the file and allows
you to monitor and react to important events more quickly. The
WARNING level logs events that may need to be attended to, while
the ERROR level logs hard (often fatal) errors.
To modify the logging level, edit your /etc/syslog.conf file and
change the lpr line to the required level, such as lpr.info as
follows:
lpr.info /var/adm/syslog.dated
Use the ps command to find the PID of the syslog daemon, then use
the following command to restart syslogd, where pid is the process
ID you found:
# kill -HUP pid
A new set of log files will be created in /var/adm/syslog.
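The edit to the lpr line can be sketched as a sed substitution. Here it runs against a throwaway copy rather than the live /etc/syslog.conf, and the single entry written to the copy is only an example; note that real syslog.conf selectors and file names must be separated by a tab.

```shell
# Sketch: change the lpr facility's level to info in a copy of
# syslog.conf.  The copy and its single entry are illustrative;
# the selector and the file name are separated by a tab.
conf=/tmp/syslog.conf.example
printf 'lpr.debug\t/var/adm/syslog.dated\n' > "$conf"
sed 's/^lpr\.[a-z]*/lpr.info/' "$conf"    # prints the entry with lpr.info
```

After making the same change to the real file, signal syslogd with kill -HUP as described above.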
Before the line discipline streams module (ldtty) closes, it sleeps for 30 seconds, waiting for the write queue to drain. In this situation, the sleep time needs to be longer. There is a kernel global variable, ldtty_drain_tmo, that specifies this time. This variable can now be patched using dbx.
# dbx -k /vmunix
(dbx) print ldtty_drain_tmo
30
(dbx) patch ldtty_drain_tmo=60
60
(dbx) quit
#
Some experimentation may be necessary to find the correct value for a specific customer environment.
The updated reference page sections for
mount(8)
follow:
mount(8), in the AdvFS Options section of the mount -o Flag Options:
atimes
Flushes to disk the file access time changes for reads
of regular files.
This is the default XPG4 behavior.
noatimes
Marks file access time changes for reads of regular files
in memory, but does not flush them to disk until other file
modifications occur. This behavior does not comply with
industry standards and is used to reduce disk writes for
applications with no dependencies on file access times.
read(2):
[DIGITAL] If the file is a regular file and belongs to an AdvFS
fileset mounted with the AdvFS option noatimes, the read, readv,
or pread function marks the st_atime field of the file for update.
If the file otherwise remains unchanged, the new st_atime value
is not flushed to disk. See mount(8) for more information on the
noatimes mount option.
System Configuration and Tuning Guide Appendix B Section 1, "AdvFS Subsystem
Attributes":
AdvfsPreallocAccess
AdvFS will allocate this number of access structures to the AdvFS
access structure freelist at startup. The minimum value is 128;
the maximum value is 65536. The actual value allocated at startup
will be adjusted to honor the AdvfsAccessMaxPercent configurable.
Default value: 128
On larger systems, a value larger than the default of 128 may
improve performance by slowing the rate of access structure recycling,
allowing cached file metadata to stay in main storage.