This chapter provides information that you must be aware of when working
with DIGITAL UNIX 4.0E and TCR 1.5 Patch Kit-0004.
1.1 Required Storage Space
The following storage space is required to successfully install this
patch kit:
Base Operating System
Temporary Storage Space
A total of ~250 MB of storage space is required to untar this patch
kit.
It is recommended that this kit not be placed in the /, /usr, or /var file
systems because this may unduly constrain the available storage space for
the patching activity.
Permanent Storage Space
Up to ~41.0 MB of storage space in /var/adm/patch/backup may be required
for archived original files if you choose to install and revert all
patches. See the Patch Kit Installation Instructions for more information.
Up to ~41.9 MB of storage space in /var/adm/patch may be required for
original files if you choose to install and revert all patches. See the
Patch Kit Installation Instructions for more information.
Up to ~653 KB of storage space is required in /var/adm/patch/doc for patch
abstract and README documentation.
A total of ~128 KB of storage space is needed in /usr/sbin/dupatch for the
patch management utility.
TruCluster Software products
Temporary Storage Space
A total of ~250 MB of storage space is required to untar this patch
kit.
It is recommended that this kit not be placed in the /, /usr, or /var file
systems because this may unduly constrain the available storage space for
the patching activity.
Permanent Storage Space
Up to ~46.5 MB of storage space in /var/adm/patch/backup may be required
for archived original files if you choose to install and revert all
patches. See the Patch Kit Installation Instructions for more information.
Up to ~47.3 MB of storage space in /var/adm/patch may be required for
original files if you choose to install and revert all patches. See the
Patch Kit Installation Instructions for more information.
Up to ~732 KB of storage space is required in /var/adm/patch/doc for patch
abstract and README documentation.
A total of ~128 KB of storage space is needed in /usr/sbin/dupatch for the
patch management utility.
1.2 Enhancements to the dupatch Utility
Beginning with Revision 26-02 of dupatch, the patch tool utility has been
enhanced to provide new features, as described in the following sections.
For more information, see the Patch Kit Installation Instructions.
1.2.1 Patch Installation from Multiuser Mode
Patches can now be installed when a system is in multiuser mode.
There are no restrictions on performing patch selection and preinstallation checking in multiuser mode.
However, although you can now install patches in multiuser mode, Compaq
recommends that you bring down your system to single-user mode when installing
patches that affect the operation of the Tru64 UNIX operating system (or the
product you are patching).
If your system must remain in multiuser mode, it
is recommended that you apply the patches when the system is as lightly loaded
as possible.
1.2.2 Patch Installation from a Pseudo-Terminal
Patches can now be installed on the system from a pseudo-terminal (pty)
while in single-user mode.
To do this, log into the system as root from a
remote location and specify that the patches are to be installed in single-user
mode.
Once all the patch prerequisites are completed, the system will be taken
to single-user mode while maintaining the network connection for the root
user.
The patches will then be installed by the system.
1.2.3 Automatic Kernel Build
If the installed patches indicate that a kernel build is required, dupatch
initiates the kernel build automatically.
In most cases, a reboot is required to complete the installation and bring
the system to a consistent running environment. Certain file types, such as
libraries, are not moved into place until you reboot the system.
When installing patches in multiuser mode, you can choose one of the
following three options after the kernel build is complete:
Reboot the system immediately.
Reboot the system at a specified time.
Forgo a system reboot.
1.3 Release Notes for Patch 590.00
This patch provides the following new features for bootable tape.
The updated btcreate(8) reference page sections follow:
Using the -d option, a user can choose the location where the btcreate
command creates its temporary files. Previously, btcreate created its
temporary files in the /usr file system and required about 156000 blocks
(512 bytes per block) of disk space in the /usr file system. Now the user
has the option of using free disk space anywhere on the system.
In the following example, the temporary files will be created at
/mnt/bt_tmp:
# ./btcreate -d /mnt/bt_tmp
Note that the btcreate -d option has also been incorporated in the
interactive mode.
The ability for a user to label disks using their own disklabel script. If
the customized disklabel script is not present, the btextract command will
label the disks in the usual manner. A customized disklabel script has the
following restrictions:
It must be located in the /usr/lib/sabt/etc directory.
It must be named custom_disklabel_file.
1.4 Release Notes for Patch 464.00
The following sections contain reference page updates.
1.4.1 Reference Page Update for cron(8)
Add the following to the DESCRIPTION section:
When the cron daemon is started with the -d option, a trace of all jobs
executed by cron is output to the file /var/adm/cron/log.
Add the following to the FILES section:
/var/adm/cron/cron.deny
    List of denied users
/var/adm/cron/log
    History information for cron
/var/adm/cron/queuedefs
    Queue description file for at, batch, and cron
Add queuedefs(4) to the Files: section of RELATED INFORMATION.
1.4.2 New Reference Page for queuedefs(4):
queuedefs(4) queuedefs(4)
NAME
queuedefs - Queue description file for at, batch, and cron commands
DESCRIPTION
The queuedefs file describes the characteristics of the queues managed by
cron or specifies other characteristics for cron. Each noncomment line in
this file describes either one queue or a cron characteristic. Each
uncommented line should be in one of the following formats.
q.[njobj][nicen][nwaitw]
max_jobs=mjobs
log=lcode
The fields in these lines are as follows:
q The name of the queue. Defined queues are as follows:
a The default queue for jobs started by at
b The default queue for jobs started by batch
c The default queue for jobs run from a crontab file
Queues d to z are also available for local use.
njob The maximum number of jobs that can be run simultaneously in the
queue; if more than njob jobs are ready to run, only the first njob
jobs will be run. The others will be initiated as currently running
jobs terminate.
nice The nice(1) value to give to all jobs in the queue that are not run
with a user ID of superuser.
nwait The number of seconds to wait before rescheduling a job that was
deferred because more than njob jobs were running in that queue, or
because the system-wide limit of jobs executing (max_jobs) has been
reached.
mjobs The maximum number of active jobs from all queues that may run at any
one time. The default is 25 jobs.
lcode Logging level of messages sent to a log file. The default is 4.
Defined levels are as follows:
level-code level
0 None
1 Low
2 Medium
3 High
4 Full
Lines beginning with # are comments, and are ignored.
EXAMPLES
The following file specifies that the b queue, for batch jobs, can have up
to 50 jobs running simultaneously and that those jobs will be run with a
nice value of 20. If a job cannot be run because too many other jobs are
running, cron will wait 60 seconds before trying again to run it. All other
queues can have up to 100 jobs running simultaneously; they will be run
with a nice value of 2, and if a job cannot be run because too many other
jobs are running, cron will wait 60 seconds before trying again to run it.
b.50j20n60w
The following file specifies that a total of 25 active jobs will be allowed
by cron over all the queues at any one time, and cron will log all messages
to the log file. The last two lines are comments that are ignored.
max_jobs=25
log=4
# This is a comment
# And so is this
FILES
/var/adm/cron
Main cron directory
/var/adm/cron/queuedefs
The default location for the queue description file.
RELATED INFORMATION
Commands: at(1), cron(8), crontab(1), nice(1)
1.4.3 Reference Page Update for crontab(1):
On days when daylight saving time (DST) changes, cron schedules commands
differently than it does on other days. The two rules described below
specify cron's scheduling policy for days when the DST change occurs.
First, some terms are defined.
An AMBIGUOUS time refers to a clock time that occurs twice
in the same day because of a DST change (usually on a day during Fall).
A NONEXISTENT time refers to a clock time that does not occur
because of a DST change (usually on a day during Spring).
DSTSHIFT refers to the offset that is applied to standard time to
result in daylight saving time. This is normally one hour, but can be
any amount of time up to 23 hours and 59 minutes.
The TRANSITION period starts at the first second after the DST shift
occurs, and ends just before DSTSHIFT time later.
An HOURLY command has a * in the hour field of the crontab entry.
RULE 1: (AMBIGUOUS times)
-------------------------
A nonhourly command is run only once at the first occurrence
of an ambiguous clock time.
o A nonhourly command scheduled for 01:15 and 01:17
will be run at 01:15 and 01:17 EDT on 10/25/98
and will not be run at 01:15 or 01:17 EST.
An hourly command is run at all occurrences of an ambiguous time.
o An hourly command scheduled for *:15 and *:17
will be run at 01:15 and 01:17 EDT on 10/25/98
and also at 01:15 and 01:17 EST.
RULE 2: (NONEXISTENT times)
---------------------------
A command is run DSTSHIFT time after a nonexistent clock time.
If the command is already scheduled to run at the newly shifted time,
then the command is run only once at that clock time.
o A nonhourly command scheduled for 02:15 and 03:15
will be run once at 03:15 EDT on 4/5/98.
o A nonhourly command scheduled for 02:15 and 02:17
will be run once at 03:15 and once at 03:17 EDT on 4/5/98.
o An hourly command scheduled for *:15 and *:17
will be run once at 03:15 and once at 03:17 EDT on 4/5/98.
Note:
cron's behavior during the transition period is undefined if the
DST shift crosses a day boundary, for example when the DST shift
is 23:29:29->00:30:00 and the transition period is 00:30:00->01:29:59.
-------------------------------------------------------------------------
Here are sample DST change values (for Eastern US time EST/EDT).
During the transition period, clock time may be either
nonexistent (02:00-02:59 EST in Spring)
or ambiguous (01:00-01:59 EDT or EST in Fall).
Spring (April 5, 1998):
DST shift: 01:59:59 EST -> 03:00:00 EDT
transition period: 03:00:00 EDT -> 03:59:59 EDT
DSTSHIFT: 1 hour forwards
Fall (Oct 25, 1998):
DST shift: 01:59:59 EDT -> 01:00:00 EST
transition period: 01:00:00 EST -> 01:59:59 EST
DSTSHIFT: 1 hour backwards
-------------------------------------------------------------------------
1.5 Release Notes for Patch 558.00
The updated reference page sections for lpr(1) follow:
The printer log, lpr.log, now reports the creation of files preceded
by a dot (.) in the spooling directories. Do not amend or delete
these files because the printer subsystem manages their creation and
cleanup.
For initial use, DIGITAL recommends that you set the logging level
to lpr.info. If you have a problem that is escalated to technical
support, the support organization will request lpr.log at the
lpr.debug level. This is because the DEBUG messages provide a
detailed trace that can only be interpreted by reference to the
source code, and lpr.log will simply grow more quickly if DEBUG
messages are logged. The lpr.info level provides a shorter report
of an event, including any network retry messages and unusual
occurrences (which are not always errors).
All changes to the status file of a queue, including reports of
any files printed, are reported at the DEBUG level rather than the
INFO level. This reduces the rate of growth of the file and allows
you to monitor and react to important events more quickly. The
WARNING level logs events that may need to be attended to, while
the ERROR level logs hard (often fatal) errors.
To modify the logging level, edit your /etc/syslog.conf file and
change the lpr line to the required level, such as lpr.info as
follows:
lpr.info /var/adm/syslog.dated
Use the ps command to find the PID of the syslog daemon, and then use
the following command to restart syslogd (substituting the PID you
found):
# kill -HUP <syslogd-pid>
A new set of log files will be created in /var/adm/syslog.
As previously mentioned, this patch provides the BSD lpd(8) print system
with support for Compaq's Advanced Printing Software (APX). This patch
allows lpr(1) print jobs to be submitted to Advanced Printing Software
(APX).
1.6 Release Notes for Patches 534.00 and 321.00
This release note describes enhanced performance for multithreaded
applications that use the malloc functions.
To make optimum use of the malloc tuning features for
performance-sensitive applications, the developer needs to consult the
Tuning Memory Allocation section of the malloc(3) reference page.
In addition, this patch adds three new tuning variables that are
particularly important to multithreaded applications. They are described
in the following sections.
1.6.1 int __delayed_free = 2;
The __delayed_free variable causes the free() function to use a "delay
slot" (of size one). This means that any time you call free, it saves your
pointer and actually frees the pointer you passed to free the previous
time. This is intended to guard against misuse of realloc, where the user
frees a pointer and then calls realloc with it. Because the delay slot is
shared across all threads, it does not provide reliable protection for
multithreaded applications. It also means that the slot is accessed
internally with atomic instruction sequences, which can create a
bottleneck on multi-CPU systems.
A value of 1 means frees are delayed only for single-threaded
applications. A value of 2 means frees are delayed for both
single-threaded and multithreaded applications. A value of 0 turns this
feature off for both classes of applications. All other values cause
undefined behavior. It is recommended that all multithreaded applications
try to use a value of 1. The default value of 2 will change to 1 in a
future release.
1.6.2 int __first_fit = 0;
The __first_fit variable is currently intended only for
performance-critical multithreaded applications. It should not be used
with single-threaded applications. Its value allows malloc and amalloc to
skip up to a larger internal cache list if the optimum node-size list is
found to be in use by another thread. The allowed values are 0, 1, and 2.
Do not use any other value. A value of 0 disables this feature. A value
of 1 allows the next larger list to be used, and a value of 2 allows the
next list after that to also be used (three lists in total). Increasing
the value of __first_fit can increase both the execution speed and the
memory consumption of multithreaded applications that make heavy
concurrent use of either the malloc functions or the same arena with the
amalloc functions.
1.6.3 int __max_cache = 15;
The __max_cache variable suggests the number of internal cache (lookaside)
lists to be used by malloc and amalloc. Each list contains blocks within
the same size range. A larger value of __max_cache causes the internal
caching of larger sized blocks. The currently allowable values for this
variable are 15, 18, 21, 24, and 27. Do not use any other value. The
given values correspond to lists containing nodes up to 632, 1272, 2552,
5112, and 10232 bytes in size, respectively. The maximum length of the
lists is determined by the __fast_free_max variable.
Application requests for storage that can be satisfied from a node on a
cache list typically complete somewhat faster than those that cannot.
Increasing the value of this variable can increase both the execution
speed and the memory consumption of an application that allocates nodes in
the given size range.
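The following sketch is illustrative only and is not part of the patch kit
documentation. It assumes that a multithreaded application overrides the
tuning variables by defining them at file scope, as the declarations in
the section headings above suggest; the values shown are examples, not
recommendations:

    #include <pthread.h>
    #include <stdlib.h>

    /* Override the malloc tuning variables described above by defining
     * them at file scope.  Consult malloc(3) before tuning. */
    int __delayed_free = 1;  /* value recommended above for threaded code */
    int __first_fit    = 1;  /* allow the next larger cache list */
    int __max_cache    = 18; /* cache lists hold nodes up to 1272 bytes */

    static void *worker(void *arg)
    {
        void *p = malloc(512);  /* allocations observe the tuned settings */
        free(p);
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;

        pthread_create(&tid, NULL, worker, NULL);
        pthread_join(tid, NULL);
        return 0;
    }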
1.7 Release Notes for Patches 320.00 and 321.00
This release note contains the new reference page for amalloc(3). A new
set of memory allocator functions, collectively known as arena malloc, has
been added in this patch. The reference page follows:
amalloc(3) amalloc(3)
NAME
acalloc, acreate, adelete, afree, amallinfo, amalloc, amallopt,
amallocblksize, arealloc - arena memory allocator
LIBRARY
Standard C Library (libc.so, libc.a)
SYNOPSIS
#include #include
void *acreate (
void *addr, size_t len, int flags, void *ushdr,
void *(*grow_func)(size_t, void *));
int adelete (void *ap);
void *amalloc (
size_t size, void *ap);
void afree (
void *ptr, void *ap);
void *arealloc (
void *ptr, size_t size, void *ap);
void *acalloc (
size_t nelem, size_t elsize, void *ap);
size_t amallocblksize (
void *ptr, void *ap);
The following function definitions are provided only for System V
compatibility:
int amallopt (
int cmd, int value, void *ap);
struct mallinfo amallinfo (
void *ap);
DESCRIPTION
The amalloc family of routines provides a main memory allocator based on
the malloc(3) memory allocator. This allocator has been extended so that
an arbitrary memory space ("arena") can be set up as an area from which
to allocate memory.
Calls to the amalloc family of routines differ from calls to the standard
malloc(3) only in that an arena pointer must be supplied. This arena pointer
is returned by a call to acreate.
acreate
Sets up an area defined as starting at virtual address addr and extending
for len bytes. Arenas can be either growing or nongrowing.
An arena that is nongrowing is constrained to use only up to len bytes
of memory. The grow_func parameter should be NULL in this case.
If the arena is "growable", len specifies the original size (minimum of
1K bytes) and the grow_func parameter specifies a function that will be
called when the allocator requires more memory. Note that the original
buffer addr will be used only for the arena header; the first time more
memory is required, the "grow" function will be called. This suggests
that a minimal (1K) original buffer should be used when setting up a
growable arena.
The grow function will be called with two parameters: the number of bytes
required and a pointer to the arena requiring the space. The number of
bytes requested will always be a multiple of M_BLKSZ (see header file).
The function should return the address of a suitably
large block of memory. This block does not need to be contiguous with
the original arena memory. This block could be obtained from a number of
sources, such as by mapping in another file (by means of mmap(2)) or by
calling malloc(3) to enlarge the program's data space. If the grow
function decides that it cannot provide any more space, it must return
(void*)-1.
The ushdr function is currently unused and must be NULL.
adelete
Causes any resources allocated for the arena (for example, mutexes) to be
freed. Nothing is done with the arena memory itself. No additional calls
to any arena functions can be made after calling adelete.
amalloc
Returns a pointer to a block of at least size bytes suitably aligned
for any use.
afree
Destroys the contents of a block previously allocated by amalloc,
arealloc, or acalloc and makes this space available for future
allocation. The argument to afree is a pointer to the block previously
allocated by amalloc, arealloc, or acalloc.
Undefined results will occur if the space assigned by any of the three
arena allocator functions is overrun or if some random number is handed
to afree. It is always permitted to pass NULL to afree.
arealloc
Changes the size of the block pointed to by ptr to size bytes and
returns a pointer to the (possibly moved) block. The contents will
be unchanged, up to the lesser of the new and old sizes. In the special
case of a null ptr, arealloc degenerates to amalloc. A zero size causes
the passed block to be freed.
acalloc
Allocates space for an array of nelem elements of size elsize. The space
is initialized to zeros.
amallocblksize
Returns the actual size of the block pointed to by ptr. The returned
size may be greater than the original requested size.
amallopt
Provides for control over the allocation algorithm. The available
values for cmd are defined in the header file.
The amallopt function can be called repeatedly but, for most commands,
not after the first small block is allocated.
amallinfo
Provides instrumentation describing space usage. It returns the mallinfo
structure defined in the header file. The structure is zero
until after the first space has been allocated from the arena.
Each of the allocation routines returns a pointer to space suitably aligned
for storage of any type of object.
RETURN VALUES
The acreate function returns NULL and sets errno if either len is less than
1K or the MEM_SHARED flag is passed.
The amalloc, arealloc, and acalloc functions return a NULL pointer if there
is not enough available memory. When arealloc returns NULL, the block
pointed to by ptr is left intact. If amallopt is called after any
allocation (for most cmd arguments) or if cmd or value is invalid, nonzero
is returned. Otherwise, it returns zero.
RELATED INFORMATION
Function: malloc(3)
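The reference page does not include an example. The following minimal
sketch, based on the SYNOPSIS above, shows how a growable arena might be
created and used. The grow function, the buffer, and the name
grow_with_malloc are illustrative assumptions, and the prototypes are
declared locally because the header names are not reproduced here:

    #include <stdlib.h>

    /* Prototypes copied from the SYNOPSIS above. */
    extern void *acreate(void *addr, size_t len, int flags, void *ushdr,
                         void *(*grow_func)(size_t, void *));
    extern int   adelete(void *ap);
    extern void *amalloc(size_t size, void *ap);
    extern void  afree(void *ptr, void *ap);

    /* Called by the allocator when the arena needs more memory; here the
     * request is satisfied from the program's heap with malloc(3). */
    static void *grow_with_malloc(size_t nbytes, void *ap)
    {
        void *blk = malloc(nbytes);
        (void)ap;                                /* arena pointer unused */
        return (blk != NULL) ? blk : (void *)-1; /* (void *)-1: cannot grow */
    }

    static char hdr_buf[1024];  /* minimal 1K buffer, used for the header */

    int main(void)
    {
        void *ap;
        char *p;

        /* Growable arena: flags 0 (MEM_SHARED is rejected), ushdr NULL. */
        ap = acreate(hdr_buf, sizeof(hdr_buf), 0, NULL, grow_with_malloc);
        if (ap == NULL)
            return 1;

        p = amalloc(256, ap);   /* allocate 256 bytes from the arena */
        if (p != NULL)
            afree(p, ap);       /* make the block available for reuse */

        adelete(ap);   /* free arena resources; arena memory is untouched */
        return 0;
    }

A nongrowing arena would be created the same way, but with a NULL grow
function and an initial buffer large enough for all expected allocations.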
1.8 Release Notes for Patch 591.00
This release note discusses the I/O Throttling/Smooth Sync patch.
Note
In order to activate I/O Throttling/Smooth Sync, you must also install Patch 568.00.
Note
Smooth Sync is for UNIX File System (UFS) only.
Update your /etc/fstab entries to enable the selected mount options on the
selected UFS file systems. The new options are smsync2 and throttle. The
smsync2 option enables an alternate smsync policy in which dirty pages do
not get flushed until they have been dirty and idle for the smoothsync age
period (default 30 seconds). The default policy is to flush dirty pages
after they have been dirty for the smoothsync age period, regardless of
continued modifications to the page. Note that mmaped pages always use
this default policy, regardless of the smsync2 setting.
For example, change from:
/dev/rz12e /mnt/test ufs rw 0 2
to:
/dev/rz12e /mnt/test ufs rw,smsync2,throttle 0 2
Note
If you choose not to use smsync2 (which does not affect memory-mapped
buffers), remove the smsync2 option from the options string in the
previous example.
Append any tuning changes to /etc/sysconfigtab. See the TUNING notes that
follow for a description of the new io-throttle-shift and
io-throttle-maxmzthruput tunables. These tunables are configured in the
vfs stanza. The following three lines make up an example:
vfs:
io-throttle-shift = 1
io-throttle-maxmzthruput = 1
Note
If you already have a vfs stanza in your sysconfigtab file, then just add
the two io-throttle entries.
To remove this patch, follow these steps:
Edit /etc/inittab and remove the following smsync lines:
smsync:23:wait:/sbin/sysconfig -r vfs smoothsync-age=30 > /dev/null 2>&1
smsyncS:Ss:wait:/sbin/sysconfig -r vfs smoothsync-age=0 > /dev/null 2>&1
Remove any additions to /etc/fstab you may have made (see the previous
instructions). Failure to remove the /etc/inittab and /etc/fstab
modifications may result in unknown attribute messages, particularly upon
system reboot.
TUNING
The purpose of this patch is to minimize system stalls resulting from a
heavy system I/O load. This patch introduces a smoothsync approach to
writing delayed I/O requests and introduces I/O throttling.
Using smoothsync allows each dirty page to age for a specified time period
before getting pushed to disk. This allows more opportunity for frequently
modified pages to be found in the cache, which decreases the net I/O load.
Also, as pages are enqueued to a device after having aged sufficiently, as
opposed to getting flushed by the update daemon, spikes in which large
numbers of dirty pages are locked on the device queue are minimized.
I/O throttling further addresses the concern of locking dirty pages on the device queue. It enforces a limit on the number of delayed I/O requests allowed to be on the device queue at any point in time. This allows the system to be more responsive to any synchronous requests added to the device queue, such as a read or the loading of a new program into memory. This may decrease the duration of process stalls for specific dirty buffers, as pages remain available until placed on the device queue.
The relevant tunable variables are as follows:
smoothsync-age
You can adjust this variable from 0 (off) up to 300. This is the number of
seconds a page ages before becoming eligible to be flushed to disk via the
smoothsync mechanism. A value of 30 corresponds to the guarantee provided
by the traditional UNIX update mechanism. Increasing this value increases
the exposure to lost data should the system crash, but can decrease the
net I/O load (and thus improve performance) by allowing the dirty data to
remain in cache longer. In some environments, any data that is not up to
date is useless; these are prime candidates for an increased
smoothsync-age value. The default value of smoothsync-age is 30.
io-throttle-shift
The greater the number of requests on an I/O device queue, the longer the
time required to process those requests and make those pages and the
device available. The number of concurrent delayed I/O requests on an I/O
device queue can be throttled by setting the io-throttle-shift tunable.
The throttle value is based on this tunable and the calculated I/O
completion rate. The throttle value is proportional to the time required
to process the I/O device queue. The correspondences between
io-throttle-shift values and the time to process the device queue are:
io-throttle-shift time to process device queue (sec)
-------------------------------------------------------------------
-2 0.25
-1 0.5
0 1
1 2
2 4
For example, an io-throttle-shift value of 0 corresponds to accommodating
1 second of I/O requests. The valid range for this tunable is [-4..4]
(not all values are shown in the previous table; you can extrapolate).
The default value of io-throttle-shift is 1. Environments particularly
sensitive to delays in accessing the I/O device might consider reducing
the io-throttle-shift value.
io-maxmzthruput
This is a toggle that trades off maximizing I/O throughput against maximizing the availability of dirty pages. Maximizing I/O throughput works more aggressively to keep the device busy, but within the constraints of the throttle. Maximizing the availability of dirty pages is more aggressive at decreasing stall time experienced when waiting for dirty pages.
The environment in which you might consider setting io-maxmzthruput off
(0) is one in which I/O is confined to a small number of I/O-intensive
applications, such that access to a specific set of pages becomes more
important for overall performance than does keeping the I/O device busy.
The default value of io-maxmzthruput is 1. Environments particularly
sensitive to delays in accessing sets of frequently used dirty pages might
consider setting io-maxmzthruput to 0.
1.9 Release Notes for Patch 534.00
The following release notes provide updated information for the
quotacheck(8), fsck(8), and fstab(4) reference pages.
quotacheck(8) Reference Page Update
SYNOPSIS
/usr/sbin/quotacheck [-guv] filesystem ...
OLD> /usr/sbin/quotacheck -a [-guv] [-l number]
NEW> /usr/sbin/quotacheck -a [-guv] [-l number] [-t [no]type]
FLAGS
OLD> -a Checks all file systems identified in the /etc/fstab file
as read/write with disk quotas.
NEW> -a Checks all UFS and AdvFS file systems identified in the
/etc/fstab file as read/write with userquota and/or
groupquota options specified, and a pass number of 1 or
greater. If the -t option is specified, only the file systems
of the specified type will be checked. Alternatively, if
type is prefixed with 'no', then the valid file systems in
the /etc/fstab file that do not have that type will be
checked.
OLD> -l number Specifies the number of times to perform disk quota
checking.
NEW> -l number Specifies the maximum number of parallel quotacheck
processes to run at one time.
NEW> -t [no]type
NEW> Specifies the file system type. The supported file systems are
as follows:
advfs - Advanced File System (AdvFS)
ufs - UNIX File System (UFS)
See fstab(4) for a description of file system types. If
the 'no' prefix is used, all of the previous file types
except the one specified are checked.
Note that the -t flag is valid only when used with the -a flag.
DESCRIPTION
OLD> The quotacheck command examines each specified file system, builds a
table of current disk usage, and compares this table against that
stored in the disk quota file for the file system. If any
inconsistencies are detected, both the quota file and the current
system copy of the incorrect quotas are updated. Each file system
must be mounted with quotas enabled.
NEW> The quotacheck command examines each specified file system, builds a
table of current disk usage, and compares this table against that
stored in the disk quota file for the file system. If any
inconsistencies are detected, both the quota file and the current
system copy of the incorrect quotas are updated.
OLD> The quotacheck command runs parallel passes on file systems using
the number specified in the fsck field of the file system's entry in
the /etc/fstab file. The quotacheck command only checks file
systems with pass number 1 or higher in the fsck field. A file
system with no pass number is not checked.
NEW> The quotacheck -a command runs parallel passes on file systems using
the number specified in the /etc/fstab pass number field. The
quotacheck command only checks file systems with pass number 1 or
higher in the fsck field. A file system with no pass number is
not checked.
OLD> For both UFS file systems and AdvFS filesets, you should assign the
root file system a fsck field value of 1, and a value of 2 or
higher to other file systems. See fstab(4) for more information.
NEW> For both UFS file systems and AdvFS filesets, you should assign the
root file system a pass number of 1, and a value of 2 or higher
to other file systems. See fstab(4) for more information.
OLD> The quotacheck command checks only file systems that have the
userquota or groupquota option specified in the /etc/fstab file.
NEW> The quotacheck command checks only file systems that are mounted.
UFS file systems must also have userquota and/or groupquota options
specified in the /etc/fstab file. The userquota and groupquota
options are only needed for AdvFS file systems if quotas are
actually going to be enforced or if they are to be selected with the
-a option.
fsck(8) Reference Page Update
OLD> When the system boots, the fsck program is automatically
run with the -p flag. The program reads the /etc/fstab file to
determine which file systems to check. Only partitions that
are specified in the fstab file as being mounted ``rw'' or
``ro'' and that have a nonzero pass number are checked.
File systems that have a pass number 1
(usually only the root file system) are checked one at a time.
When pass 1 completes, all the remaining file systems are
checked, with one process running per disk drive.
NEW> When the system boots, the fsck program is automatically
run with the -p flag. The program reads the /etc/fstab file to
determine which file systems to check. Only partitions that
are specified in the fstab file as being mounted ``rw'' or
``ro'' and that have a nonzero pass number are checked.
File systems that have a pass number 1
(usually only the root file system) are checked one at a time.
When pass 1 completes, the remaining pass numbers are processed
with one parallel fsck process running per disk drive in the
same pass.
NEW> The per disk drive logic is based on the /dev/disk/dsk0a
syntax where different partition letters are treated as being
on the same disk drive. Partitions layered on top of an LSM
device may not follow this naming convention. In this case
unique pass numbers in /etc/fstab may be used to sequence fsck
checks.
fstab(4) Reference Page Update
userquota [=filename] and groupquota [=filename]
If quotas are to be enforced for users or groups,
one or both of the options must be specified. If
userquota is specified, user quotas are to be enforced.
If groupquota is specified, group:
OLD> quotas are to be enforced.
NEW> quotas are to be enforced (see quotaon and quotaoff(8)).
OLD> For UFS file systems, the sixth field, (fsck), is used by
the fsck command to determine the order in which file system
checks are done at reboot time. For the root file system,
specify 1 in the fsck field. For other UFS file systems,
specify 2 or higher in the fsck field. Each UFS file system
should have a unique fsck value.
NEW> For UFS file systems, the sixth field, (pass number), is
used by the fsck and quotacheck commands to determine the
order in which file system checks are done at reboot time.
For the root file system, specify 1 in the fsck field. For
other UFS file systems specify 2 or higher in the pass number
field.
OLD> For AdvFS filesets, the sixth field is a pass number
field that allows the quotacheck command to perform all of the
consistency checks needed for the fileset. For the root file
system, specify 1 in the fsck field. Each AdvFS fileset in
an AdvFS file domain should have a unique fsck value, which
should be 2 or higher.
NEW> For AdvFS filesets, the sixth field is a pass number
field that allows the quotacheck command to perform all of the
consistency checks needed for the fileset. For the root file
system, specify 1 in the fsck field. For other AdvFS file
systems specify 2 or higher in the pass number field.
OLD> File systems that are on the same disk are checked
sequentially, but file systems on different disks are
checked at the same time to utilize parallelism available
in the hardware. If the sixth field is not present or zero,
a value of 0 is returned and the fsck command
assumes that the file system does not need to be checked.
NEW> File systems that are on the same disk or domain are checked
sequentially, but file systems on different disks or
domains but with the same or greater than 1 pass number are
checked at the same time to utilize parallelism available in
the hardware. When all the file systems in a pass have
completed their checks, then the file systems with the
numerically next higher pass number will be processed.
NEW> The UFS per disk drive logic is based on the
/dev/disk/dsk0a syntax where different partition letters
are treated as being on the same disk drive. Partitions
layered on top of an LSM device may not follow this naming
convention. In this case unique pass numbers may be used
to sequence fsck and quotacheck processing. If the sixth
field is not present or 0, a value of 0 is returned
and the fsck command assumes that the file system does
not need to be checked.
1.10 Release Notes for Patch 581.00
If the system configurable parameter lsm:lsm_V_ROUND_enhanced is set
(value = 1), the enhanced read round robin policy is activated. This new
policy stores the last block accessed by the previous I/O request. When
returning for another block in round robin (V_ROUND) mode, that value is
compared to the current read. If it is within a predefined,
user-configurable value (lsm:lsm_V_ROUND_enhance_proximity), then the same
plex is used. Otherwise the next plex is used, as in normal round robin
behavior.
The two new tunable parameters are lsm_V_ROUND_enhanced, which is set to 1
by default (the enhanced V_ROUND read policy is activated), and
lsm_V_ROUND_enhance_proximity, which is set to 512 by default.
Append any tuning changes to /etc/sysconfigtab. Refer to the TUNING notes
below for a description of the new lsm_V_ROUND_enhanced and
lsm_V_ROUND_enhance_proximity tunables. These tunables are configured in
the lsm stanza. The following three lines are an example:
lsm:
lsm_V_ROUND_enhanced = 1
lsm_V_ROUND_enhance_proximity = 1024
Note
If you already have an lsm stanza in your sysconfigtab file, then just add
the two lsm_V_ROUND entries.
TUNING
The purpose of this patch is to increase performance for sequential reads.
This patch introduces a new enhanced round robin mode in which the last
block read is compared to the next block to be read, and a check is added
to see whether the difference between the last block number and the next
block number is less than or equal to lsm_V_ROUND_enhance_proximity. If
it is, the read is issued from the same plex. This attempts to hit the
disk cache, thereby increasing performance.
The relevant tunable variables are as follows:
lsm_V_ROUND_enhanced (default = 1)
This variable activates the new enhanced round robin read policy if it is
set to TRUE (1). Otherwise the policy is deactivated.
lsm_V_ROUND_enhance_proximity (default = 512)
This variable provides the proximity within which the last read and the
new read must lie in an attempt to read data from the disk's cache by
reading from the same plex. The variable can be adjusted from 0 to 4096.