This chapter introduces file systems and the basic system administration tasks related to file systems. Several file systems are supported, but the Advanced File System (AdvFS) and UNIX File System (UFS) are the principal file systems used by applications and the components of the UNIX operating system. If your system was delivered with the operating system already installed, you will find that AdvFS is configured as the default file system. Consult the AdvFS Administration guide for information on administering AdvFS.
If you installed the operating system yourself, you may have opted to create one or more UFS file systems. Even if your system arrived configured for AdvFS, you can still create UFS file systems. Both file systems can coexist on a system and many administrators opt to use the familiar UFS file system on system disks or in instances where the advanced features of AdvFS are not required. This chapter discusses system administration tasks related to the following file system topics:
Section 6.1 provides an introduction to the file systems that are available.
Section 6.2 describes Context-Dependent Symbolic Links (CDSLs), which facilitate the joining of systems into clusters.
Section 6.3 describes how you create UFS file systems manually, using the command line.
Section 6.4 describes how you create UFS file systems using the SysMan Menu tasks.
Section 6.5 describes how you control UFS file system resources by assigning quotas to users.
Section 6.6 provides pointers to methods of backing up UFS file systems.
Section 6.7 briefly describes features for monitoring and tuning file systems.
Section 6.8 provides information for troubleshooting UFS file system problems.
There are several other sources of information about system administration tasks and file systems. This chapter directs you to those sources when appropriate.
6.1 Introduction to File Systems
The UNIX operating system supports current versions of several file systems, including:
Advanced File System (AdvFS). This file system has its own documentation and advanced interfaces. Refer to the AdvFS Administration guide and the advfs(4) reference page for more information. There are advanced administrative utilities available for AdvFS. When these utilities are available, there is a launch icon named Advanced File System in the CDE Application Manager - Storage_Management folder. Consult the AdvFS documentation for information on installing and using the advanced administrative utilities.
Basic AdvFS utilities are provided as SysMan Menu tasks. Refer to Chapter 1 for information on accessing these tasks. There is online help for the utilities provided by SysMan.
UNIX File System (UFS), documented in this chapter.
See also the ufs_fsck(8), sys_attrs_ufs(5), and tunefs(8) reference pages for information on attributes and utilities.
ISO 9660 Compact Disk File System (CDFS).
Refer to the cdfs(4) reference page for information.
Memory File System (mfs).
Refer to the newfs(8) reference page for information on mfs.
File on File Mounting file system (ffm).
Refer to the ffm(4) reference page for information on ffm.
You may also need to refer to the following volumes:
The Logical Storage Manager guide for information about using the Logical Storage Manager (LSM) with both the AdvFS and UFS file systems.
The AdvFS Administration guide for information on converting file systems from UFS to AdvFS, and from AdvFS to UFS.
The System Configuration and Tuning guide for information on advanced UFS file system tuning.
The rest of this section, and following sections, introduce concepts that are important in the context of creating and administering file systems. The information is not essential for basic file system creation and administration, but may be useful if you plan to perform advanced operations or perform troubleshooting tasks.
The following list provides a brief overview of the topics, with detailed information in the sections that follow:
Any file system, whether local or remotely mounted, is part of the total directory hierarchy of a system or cluster. It can be considered as a tree, growing from the root file system (/) and branching as additional directories are added to the basic system hierarchy. When you create a UFS file system, such as /usr/usrs/projects, you add it as a new branch on the hierarchy, under the existing /usr/usrs branch.
The common form of file system storage on all systems is a hard disk. The administration of such devices is described in Chapter 5. A disk is divided into logical partitions, which may be the whole disk (partition c) or parts of the disk, such as partitions a through h. Depending on the size of the disk, the partitions vary in size, and are usually expressed in megabytes (MB). When you initially create a file system, you create it on a disk partition and thus assign a finite amount of disk space to that file system. Increasing the size of a UFS file system may involve moving it to a bigger partition or disk.
A file system has an on-disk data structure that describes the layout of data on the physical media. You may need to know this structure to troubleshoot the file system or perform advanced operations such as tuning. For most common operations, you will not need to know this information in detail. Reference information is provided in the following sections.
The various directory and file types will be displayed in the output of common commands that you use. Reference information is provided so that you can identify file types such as symbolic links or sockets. For more detailed information, invoke the appropriate reference page as follows:
Regular files - Refer to the file(1) reference page.
Directories - Refer to the ls(1) and dir(1) reference pages.
Device special files - Refer to Chapter 5.
Sockets - Refer to the socket(2) reference page, the Network Administration guide, and the Network Programmer's Guide.
Pipes - Refer to the pipe(2) reference page.
Symbolic links - Refer to the link(1) and ln(1) reference pages.
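The file type of each entry is visible in the first character of ls -l output: - for regular files, d for directories, l for symbolic links, p for named pipes, and s for sockets. A minimal sketch, run in a scratch directory with illustrative names:

```shell
# Create one example of several file types in a scratch directory,
# then read the type character from the first column of ls -ld.
dir=$(mktemp -d)
cd "$dir"

touch regular.txt            # regular file
mkdir subdir                 # directory
ln -s regular.txt sym.txt    # symbolic link
mkfifo pipe0                 # named pipe

for f in regular.txt subdir sym.txt pipe0; do
    # The first character of the mode string identifies the type.
    ls -ld "$f" | cut -c1
done
# prints -, d, l, and p, one per line
```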
6.1.1 Directory Hierarchy for File Systems
The location of file systems is based on the UNIX directory hierarchy, beginning with a root (/) directory. The file systems that you create become usable (or active) when they are mounted on a mount point in the directory hierarchy. For example, during installation of the operating system, you may have created the usr file system (as UFS), which is then automatically mounted on root (/) and has a pathname of /usr in the hierarchy.
The standard system directory hierarchy is set up for efficient organization. It separates files by function and intended use. Effective use of file systems includes placing command files in directories that are in the normal search path as specified by a user's setup file, such as .cshrc, .profile, or .login. Some of the directories are actually symbolic links. See the hier(5) reference page for more information about the operating system's directory hierarchy, including the hierarchy of the X11 Window System.
Mounting a file system makes it available for use. Use the mount command to attach (or mount) file systems to the file system hierarchy under the system root directory; use the umount command to detach (or unmount) them. When you mount a file system, you specify a location (the mount point under the system root directory) to which the file system will attach. See mount(8) for more information about mounting and unmounting file systems.
The root directory of a mounted file system is also its mount point. Only one system root directory can exist on a system, because the system uses the root directory as its source for system initialization files. Consequently, all file systems that are local to an operating system are mounted under that system's root directory.
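You can see which mounted file system holds a given directory with the df command; in POSIX (-P) output format, the last field of each line is the mount point. A small sketch:

```shell
# Report the mount point of the file system that holds the root
# directory. df -P prints one line per file system in a fixed
# format; the last field is the mount point.
df -P / | awk 'NR == 2 { print "mounted on:", $NF }'
# -> mounted on: /
```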
6.1.2 Disk Partitions
A disk consists of physical storage units called sectors. Each sector is usually 512 bytes. A sector is addressed by the logical block number (LBN), which is the basic unit of the disk's user-accessible data area that you can address. The first LBN is numbered 0, and the highest LBN is numbered one less than the number of LBNs in the user-accessible area of the disk.
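Because each sector usually holds 512 bytes, a partition's size in megabytes follows directly from its sector count. As a sketch, the 17773524-sector c partition of the RZ1DF-CB disk shown in Example 6-1 works out to roughly 8678 MB:

```shell
# Convert a size in 512-byte sectors to megabytes using integer
# shell arithmetic. The sector count is the example c partition
# from Example 6-1.
sectors=17773524
bytes=$(( sectors * 512 ))
mb=$(( bytes / 1048576 ))   # 1 MB = 1048576 bytes
echo "${sectors} sectors = ${mb} MB"
# prints: 17773524 sectors = 8678 MB
```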
Sectors are grouped together to form up to eight disk partitions. However, disks differ in the number and size of partitions. The /etc/disktab file contains a list of supported disks and the default partition sizes for the system. Refer to disktab(4) for more information.
Disk partitions are logical divisions of a disk that allow you to organize files by putting them into separate areas of varying sizes. Partitions hold data in structures called file systems and can also be used for system operations such as paging and swapping. File systems have a hierarchical structure of directories and files, as shown in hier(5).
Disk partitions have default sizes that depend on the type of disk and that can be altered by using the disklabel command or the diskconfig graphical user interface. Partitions are named a to h. While it is possible for you to make the allocated space for a partition overlap another partition, the default partitions are never overlapping, and a properly used disk must not have file systems on overlapping partitions.
The following example shows the default partitioning for a model RZ1DF-CB disk, obtained with the following command:
# disklabel -r /dev/rdisk/dsk0a
Note that only the disk table part of the output is shown here. Also listed is an example of an HSZ RAID disk, taken from the rz(7) reference page.
Example 6-1: Default Partitions for RZ1DF-CB Disk and HSZ RAID Devices
(RZ1DF-CB Disk)
8 partitions:
# size offset fstype [fsize bsize cpg] # NOTE: values not exact
a: 262144 0 4.2BSD 1024 8192 16 # (Cyl. 0 - 78*)
b: 1048576 262144 swap # (Cyl. 78*- 390*)
c: 17773524 0 unused 0 0 # (Cyl. 0 - 5289*)
d: 1048576 1310720 swap # (Cyl. 390*- 702*)
e: 9664482 2359296 AdvFS # (Cyl. 702*- 3578*)
f: 5749746 12023778 unused 0 0 # (Cyl. 3578*- 5289*)
g: 1433600 524288 unused 0 0 # (Cyl. 156*- 582*)
h: 15815636 1957888 unused 0 0 # (Cyl. 582*- 5289*)
HSZ10, HSZ40, HSZ50, HSZ70 (RAID) Partitions
Disk Start Length
dsk?a 0 131072
dsk?b 131072 262144
dsk?c 0 end of media
dsk?d 0 0
dsk?e 0 0
dsk?f 0 0
dsk?g 393216 end of media
dsk?h 0 0
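Whether two partitions overlap can be checked directly from the offset and size columns of a disk label: two partitions overlap when each starts before the other ends. A minimal sketch, using partition values from Example 6-1 above (the helper function name is illustrative):

```shell
# Return success (0) if two partitions, each given as "offset size"
# in sectors, overlap on the disk.
overlaps() {
    o1=$1 s1=$2 o2=$3 s2=$4
    [ $(( o1 < o2 + s2 )) -eq 1 ] && [ $(( o2 < o1 + s1 )) -eq 1 ]
}

# Partition b (offset 262144, size 1048576) vs. partition d
# (offset 1310720, size 1048576): adjacent, not overlapping.
overlaps 262144 1048576 1310720 1048576 \
    && echo "b and d overlap" || echo "b and d do not overlap"

# Partition c spans the whole disk, so it overlaps partition a.
overlaps 0 17773524 0 262144 \
    && echo "a and c overlap" || echo "a and c do not overlap"
```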
The disk label is located in block 0 (zero) in one of the first sectors of the disk. The disk label provides detailed information about the geometry of the disk and the partitions into which the disk is divided. The system disk driver and the boot program use the disk label information to recognize the drive, the disk partitions, and the file systems. Other information is used by the operating system to use the disk most efficiently and to locate important file system information.
The disk label description of each partition contains an identifier for the partition type (for example, standard file system, swap space, and so on). There are two copies of a disk label, one located on the disk and one located in system memory. Because it is faster to access system memory than to perform I/O, when a system recognizes a disk, it copies the disk label into memory. The file system updates the in-memory copy of the label if it contains incomplete information about the file system. You can change the label with the disklabel command. Refer to disklabel(8) for more information on the command-line interface. Refer to Chapter 5 for information on the disk configuration utility diskconfig.
6.1.3 UFS Version 4.0
The version of UFS that is currently provided is at revision 4.0. This version has the same on-disk data layout as UFS Version 3.0, as described in Section 6.1.4, but has larger capacities.
Version 4.0 supports 65533 hard links or subdirectories, while Version 3.0 supports 32767 hard links or subdirectories. The actual number of subdirectories is 65531 (64k) and 32765 (32k), because an empty directory already has two hard links, to itself and to its parent directory. When you use the ls -a command, these links are displayed as . and .. (dot and dot-dot). In the remainder of this section, the examples all refer to 32k subdirectories, although the information also applies to files having 32k or more hard links.
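You can confirm the two initial hard links of an empty directory with ls: the second column of ls -ld is the link count, and each subdirectory added contributes one more link through its .. entry. A quick sketch in a scratch directory:

```shell
# An empty directory starts with a link count of 2: its own "."
# entry plus its name in the parent directory. Each subdirectory
# adds one more link (the subdirectory's ".." entry).
d=$(mktemp -d)
mkdir "$d/empty"
ls -ld "$d/empty" | awk '{ print "link count:", $2 }'   # link count: 2

mkdir "$d/empty/sub1" "$d/empty/sub2"
ls -ld "$d/empty" | awk '{ print "link count:", $2 }'   # link count: 4
```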
There are some considerations and important restrictions that you should take into account, particularly when using both versions, as follows:
Using newfs or diskconfig to create file systems - When you create new file systems using newfs or diskconfig, the new file systems are always created as Version 3.0 (32k subdirectories or hard links) to minimize any incompatibility problems.
Using fsck to check file systems - When you use fsck to check a dirty file system (such as one not unmounted normally, or perhaps after a system crash), the file system will be marked as either Version 3.0 or Version 4.0, depending on the maximum number of subdirectories found. If fsck finds a directory with more than 32k subdirectories, the file system will be marked as Version 4.0. Otherwise, if fsck does not find a directory with more than 32k hard links, the file system will be marked as Version 3.0. A file system will normally be converted to Version 4.0 as soon as the 32k subdirectory limit is exceeded by a user.
A new fsck option, -B, has been added. This option enables you to convert Version 4.0 file systems back to Version 3.0. When you use this option, fsck makes the conversion only if no directory in the file system has more than 32k subdirectories and no file has more than 32k hard links.
The following important restrictions apply when using both Version 3.0 and Version 4.0 of UFS on systems that are running previous versions of the operating system (such as V4.0F):
Do not run previous versions of fsck using the -p or -y options on a Version 4.0 file system unless you are certain that there are no directories that have more than 32k subdirectories. If you attempt to do this, any directories that have more than 32k subdirectories will be permanently deleted from the file system.
Do not list directories with more than 32k subdirectories in the root (/) and /usr partitions (or other UFS partitions) in the /etc/fstab file. At boot time, fsck -p runs automatically on all file systems listed in /etc/fstab.
As a protection against this, Version 4.0 creates a mismatch between the main superblock and the alternate superblocks so that old versions of fsck -p cannot be run on a Version 4.0 file system. The first time you attempt to run the old version of fsck -p on a Version 4.0 file system that has more than 32k subdirectories, it will fail because of a superblock mismatch with the alternate superblocks. When you are prompted to specify an alternate superblock, always respond n. Even if you inadvertently enter y, the Version 4.0 file system will remain untouched, providing you do not enter y when the following prompt is displayed:
CLEAR? [yn]
At this time, you can correct the FREE BLK COUNT and the UPDATE STANDARD SUPERBLOCK if required. However, the second time you run fsck -p on a Version 4.0 file system, this mismatch protection will not exist. Any directories with more than 32k subdirectories will be permanently deleted.
As there are no on-disk data layout differences between the two releases of UFS, you can mount any legacy Version 3.0 file systems on the latest release of UNIX. If you attempt to create more than 32k hard links on a Version 3.0 file system, it will be automatically converted to Version 4.0. The following example system message will be displayed during conversion:
Marking /dev/disk/dsk023 as Tru64 UNIX UFS v.4
If you want to share or mount a Version 4.0 file system that does not have more than 32k subdirectories, you can mount it on a system that is running a previous version of the operating system that supports only Version 3.0, such as Tru64 UNIX Version 4.0F. However, you must first convert the file system from Version 4.0 as follows:
On the system that supports Version 3.0, use the fsck command on the file system partition, as shown in the following example:
# fsck /dev/rrz03
On the system that supports Version 4.0, use the fsck command on the file system partition, as shown in the following example:
# fsck -B /dev/disk/dsk34d
6.1.4 File System Structures: UFS
This section discusses the structure of the UFS. The structure of the AdvFS is discussed in AdvFS Administration.
A UFS file system has four major parts:
The first block of every file system (block 0) is reserved for a boot, or initialization, program.
Block 1 of every file system is called the superblock and contains the following information:
Total size of the file system (in blocks)
Name of the file system
Device identification
Date of the last superblock update
Head of the free-block list, which contains all of the free blocks (the blocks available for allocation) in the file system
When new blocks are allocated to a file, they are obtained from the free-block list. When a file is deleted, its blocks are returned to the free-block list.
List of free inodes, which is the partial listing of inodes available to be allocated to newly created files
A group of blocks follows the superblock. Each of these blocks contains a number of inodes. Each inode has an associated inumber. An inode describes an individual file in the file system. There is one inode for each file in the file system. File systems have a maximum number of inodes; therefore there is a maximum number of files that a file system can contain. The maximum number of inodes depends on the size of the file system.
The first inode (inode 1) on each file system is unnamed and unused. The second inode (inode 2) must correspond to the root directory for the file system. All other files in the file system are under the file system's root directory. After inode 2, you can assign any inode to any file. You can also assign any data block to any file. The inodes and blocks are not allocated in any particular order.
If an inode is assigned to a file, the inode can contain the following information:
File type
The possible types are regular, device, named pipe, socket, and symbolic link files.
File owner
The inode contains the user and group identification numbers that are associated with the owner of the file.
Protection information
Protection information specifies read, write, and execute access for the file owner, members of the group associated with the file, and others. The protection information also includes other mode information specified by the chmod command.
Link count
A directory entry (link) consists of a name and the inumber (inode number) that represents the file. The link count indicates the number of directory entries that refer to the file. A file is deleted if the link count is zero; the file's inode is returned to the list of free inodes, and its associated data blocks are returned to the free-block list.
Size of the file in bytes
Last file access date
Last file modification date
Last inode modification date
Pointers to data blocks
These pointers indicate the actual location of the data blocks on the physical disk.
Data blocks, which contain the actual contents of the files in the file system.
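Much of the per-file inode information described above is visible from the command line. The ls -il command shows the inumber, mode bits, link count, owner, group, size, and modification date on one line; note that the file's name is not stored in the inode but in a directory entry. A sketch with an illustrative temporary file:

```shell
# Display inode information for a file. The first field of ls -i
# output is the inumber; ls -l adds the mode, link count, owner,
# group, size, and modification date, all stored in the inode.
f=$(mktemp)
echo "some data" > "$f"
ls -il "$f"
```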
6.1.5 Directories and File Types
The operating system views files as bit streams, allowing you to define and handle on-disk data, named pipes, UNIX domain sockets, and terminals as files. This object-type transparency provides a simple mechanism for defining and working with a wide variety of storage and communication facilities. The operating system handles the various levels of abstraction as it organizes and manages its internal activities.
While you notice only the external interface, you should understand the various file types recognized by the system. The system supports the following file types:
Regular files contain data in the form of a program, a text file, or source code, for example.
Directories are files that contain the names of files or other directories.
Character and block device special files identify physical and pseudodevices on the system.
UNIX domain socket files provide a connection between network processes. The socket system call creates socket files.
Named pipes are device files that processes use to communicate with each other.
Linked files point to target files or directories. A linked file contains the name of the target file. A symbolically linked file and its target file can be located on the same file system or on different file systems. A file with a hard link and its target file must be located on the same file system.
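The practical difference between the two link types shows up when the target is removed: a hard link keeps the data reachable because both names refer to the same inode, while a symbolic link is left dangling. A quick demonstration in a scratch directory (file names are illustrative):

```shell
d=$(mktemp -d)
cd "$d"
echo "payload" > target

ln target hard.txt        # hard link: a second name for the same inode
ln -s target soft.txt     # symbolic link: a file holding the name "target"

rm target

cat hard.txt              # still prints "payload"; the inode survives
cat soft.txt 2>/dev/null || echo "soft.txt is dangling"
```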
Device special files represent physical devices, pseudodevices, and named pipes. The /dev directory contains device special files. Device special files serve as the link between the system and the device drivers. Each device special file corresponds to a physical device (for example, a disk, tape, printer, or terminal) or a pseudodevice (for example, a network interface, a named pipe, or a UNIX domain socket). The driver handles all read and write operations and follows the required protocols for the device.
There are three types of device special files:
Block device special files
Block device special files are used for devices whose driver handles I/O in large blocks and where the kernel handles I/O buffering. Physical devices such as disks are defined as block device files. An example of the block device special files in the /dev directory follows:
brw------- 1 root system 8, 1 Jan 19 11:20 /dev/disk/dsk0a
brw------- 1 root system 8, 1 Jan 19 10:09 /dev/disk/dsk0b
Character device special files
Character device special files are used for devices whose drivers handle their own I/O buffering. Disk, terminal, pseudoterminal, and tape drivers are typically defined as character device files. An example of the character device special files in the /dev directory follows:
crw-rw-rw- 1 root system 7, 0 Jan 31 16:02 /dev/ptyp0
crw-rw-rw- 1 root system 7, 1 Jan 31 16:00 /dev/ptyp1
crw-rw-rw- 1 root system 9,1026 Jan 11 14:20 /dev/rtape/tap_01
Another case of a character device special file is the raw disk device, for example:
crw-rw-rw- 1 root system 7, 0 Jan 10 11:19 /dev/rdisk/dsk0a
Socket device files
The printer daemon (lpd) and the error logging daemon (syslogd) use the socket device files. An example of the socket device files in the /dev directory follows:
srw-rw-rw- 1 root system 0 Jan 22 03:40 /dev/log
srwxrwxrwx 1 root system 0 Jan 22 03:41 /dev/printer
For detailed information on device special files and their naming conventions, refer to Chapter 5.
6.2 Context-Dependent Symbolic Links and Clusters
This section describes Context-Dependent Symbolic Links (CDSLs), a feature of the directory hierarchy that supports joining systems into clusters. CDSLs impose certain requirements on the file system and directory hierarchy of all systems, even those which are not currently in a cluster. You should be aware of these requirements as follows:
The root (/), /var, and /usr file systems each have a /cluster subdirectory that is not used on a single system but must not be deleted, or the system cannot be added to a cluster at some future time.
When systems are joined into clusters, they are designated as members of the cluster. There is a unique pathname to any file, including an identifier that is unique to the member system (member-specific). These pathnames are called context-dependent symbolic links (CDSLs). As the name implies, CDSLs are symbolic links with a variable element in the pathname. The variable element is different for each cluster member and provides the context when it is resolved by an application or command.
Some important system files reside in target directories which have unique CDSLs pointing to the target location. This design ensures that shared (cluster-wide) files are kept separate from unshared (member-specific) files.
Update installations may fail if CDSLs are moved or destroyed.
See the hier(5) reference page for a description of the directory structure.
CDSLs enable systems joined together as members of a cluster to have a global namespace for all files and directories that they need to share. CDSLs allow base components and layered applications to be cluster aware. Shared files and directories work equally well on a cluster and on a single system, and the file system administration tools work identically on a single system and in a cluster.
If CDSLs are important to you because your systems may become cluster members at some future date, you should read the following sections. If you encounter errors that refer to missing CDSLs (such as a failed update installation), you may need to maintain, verify, or repair CDSLs as described in the following sections.
6.2.1 Related Documentation
The following documents contain information about CDSLs:
The Installation Guide contains information about update installations.
The installupdate(8) reference page describes the update installation process.
The TruCluster documentation describes the process of adding a system to a cluster and further explains how CDSLs are utilized on a running cluster. Note that this documentation is not part of the base documentation set.
The local(4), ls(1), ln(1), and hier(5) reference pages provide reference information and information on commands.
The cdslinvchk(8) reference page contains a discussion of the /usr/sbin/cdslinvchk script, which you use to produce an inventory of all CDSLs on a single system when the system is installed or updated.
Individual systems can be connected into clusters that appear as one system to users. A single system in a cluster is called a member. (See the TruCluster documentation for a description of a Tru64 UNIX cluster.) To facilitate clustering, file systems must have a structure and identifying pathname that allows certain files to be unique to the individual cluster member and contain member-specific information.
Other files may need to be shared by all members of a cluster. The CDSL pathname allows the different systems in a cluster to share the same file hierarchy. Users and applications can use traditional pathnames to access files and directories whether they are shared or member-specific.
For example, if two systems are standalone or simply connected by a network link, each has an /etc/passwd file that contains information about its authorized users. When two systems are members of a cluster, they share a common /etc/passwd file that contains information about the authorized users for both systems.
Other shared files are:
Any configuration files and directories that are site-specific rather than system-specific, such as /etc/timezone or /etc/group
Files and directories that contain no customized information, such as /bin or /usr/bin
Any device special files for disk and tape devices that are available cluster-wide
Some files must always be member-specific; that is, not shared. The file /etc/rc.config is an example of a member-specific file, while rc.config.common is a shared file. These files contain configuration information that applies either only to the individual system or to all members of a cluster. CDSLs allow clustered systems to share files and to maintain the identity of member-specific files.
Other categories of member-specific files are:
Certain directories, such as /var/adm/crash. These directories contain files that are created by applications, utilities, or daemons and that apply only to the individual cluster member.
Some device special files located in /dev and /devices.
Configuration files that reference member-specific device special files, such as /etc/securettys.
Processor-specific files used during booting or configuration, such as /vmunix and /etc/sysconfigtab.
When a system is not connected to a cluster, the pathnames are still present, although they are transparent to users. You must be aware of the cluster file naming conventions and must preserve the file structure. If a CDSL is accidentally removed, you may need to re-create it.
6.2.2.1 Structure of a CDSL
CDSLs are simply the symbolic links described in ln(1). The links contain a variable that identifies each system that is a cluster member. This variable is resolved at run time into a target. A CDSL is structured as follows:
/etc/rc.config -> /cluster/members/{memb}/etc/rc.config
Before support for clusters was introduced, the pathname for this file was /etc/rc.config. This file is now linked through a CDSL to a member-specific target, and the structure of the link can be interpreted as follows:
The /cluster directory resides in the root directory and contains paths to the files that are either shared or (as in this example) member-specific.
The /cluster/members/ directory contains a directory for the local member identifier, member0, and a link to the variable path element {memb}. The directory /cluster/members/member0 contains member-specific system directories such as devices and etc.
The {memb} variable path element is used to identify individual members of a cluster. At run time, this variable is resolved to member, appended with the value of the sysconfigtab variable generic:memberid. The default value for this variable is zero, and the value is unique for each member of a cluster.
The file /.local.. in root is a link to cluster/members/{memb} and defines the system-specific files. Any system-specific file can be referenced or created through the /.local.. path. A file created as /.local../etc/[filename] is not accessible through the path /etc/[filename], because /etc is a shared directory. The file is only accessible through /.local../etc/[filename] and /cluster/members/{memb}/etc/[filename].
When a single system is not clustered with other systems, the variable generic:memberid is automatically set to zero. An example of a typical CDSL on a single system is:
/cluster/members/{memb}/etc/rc.config
This CDSL is resolved to:
/cluster/members/member0/etc/rc.config
When a system is clustered with two other systems and the variable generic:memberid is set to three, the same CDSL is resolved to:
/cluster/members/member3/etc/rc.config
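On a running system the kernel performs the {memb} substitution; as a plain illustration of the path construction only, the sketch below substitutes member${memberid} for {memb} in a CDSL-style pathname (the function name is illustrative):

```shell
# Illustrate how a CDSL pathname resolves: replace the {memb}
# variable element with "member" plus the value of the
# generic:memberid variable. On a real system the kernel
# performs this substitution at run time.
resolve_cdsl() {
    path=$1 memberid=$2
    echo "$path" | sed "s/{memb}/member${memberid}/"
}

resolve_cdsl '/cluster/members/{memb}/etc/rc.config' 0
# -> /cluster/members/member0/etc/rc.config  (standalone system)

resolve_cdsl '/cluster/members/{memb}/etc/rc.config' 3
# -> /cluster/members/member3/etc/rc.config  (cluster member 3)
```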
When running in a cluster, a file that is member-specific can be referenced in the following three ways:
From your specific system in a member-specific or shared format, for example: /var/adm/crash/crash-data.5
From your specific system in a member-specific format only, for example: /.local../var/adm/crash/crash-data.5
From any member of the cluster, for example: /cluster/members/member0/var/adm/crash/crash-data.5
Two special cases of CDSLs exist only for members of a cluster:
Miniroot
Special Unshared Directories:
/dev -> /cluster/members/{memb}/dev
/tmp -> /cluster/members/{memb}/tmp
Refer to the TruCluster documentation for more information.
6.2.3 Maintaining CDSLs
Symbolically linked files enjoy no special protection beyond the general user and file access mode protections afforded all files. CDSLs have no special protection either. On a single system, several situations can cause a failure when a CDSL has been broken:
Whenever an update installation of the operating system is performed. On a system that is not in a cluster, you will become aware of missing CDSLs only when you attempt to update the operating system using the update installation process, installupdate(8), and it fails. To prevent this problem, always run the /usr/sbin/cdslinvchk script before an update installation in order to obtain its report on the state of CDSLs on your system.
When a user or application moves or removes a member-specific CDSL. Member-specific CDSLs can be accidentally removed with the rm or mv commands. To prevent this problem, avoid manual edits and file creations and use tools such as vipw (for editing /etc/passwd) to edit files. All system administration tools and utilities are aware of CDSLs and should be the preferred method for managing system files.
6.2.3.1 Checking CDSL Inventory
Use the script /usr/sbin/cdslinvchk to check the CDSL inventory on a single system. Periodically, revise the inventory and check the CDSLs against it. See cdslinvchk(8) for more information.
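A broken CDSL is simply a dangling symbolic link, so you can also spot-check an individual link from the shell. The following sketch is a generic helper, not part of cdslinvchk; the function name is illustrative, and /etc/rc.config is used only as an example path:

```shell
# check_cdsl PATH: report whether PATH is a symbolic link and whether
# its target resolves.  Returns 0 if the link is intact, 1 otherwise.
check_cdsl() {
    if [ ! -h "$1" ]; then
        echo "$1: not a symbolic link"
        return 1
    elif [ -e "$1" ]; then
        echo "$1: link target resolves"
        return 0
    else
        echo "$1: dangling link (broken CDSL)"
        return 1
    fi
}
# Usage on a cluster member:  check_cdsl /etc/rc.config
```

The `-e` test follows the link, so a CDSL whose member-specific target is missing reports as dangling.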
6.2.3.2 Creating CDSLs
If a CDSL is accidentally destroyed, or if a new CDSL must be created, the process for repairing or creating links is described in ln(1). For example, if the /etc/rc.config link is destroyed, you re-create it as follows:
Check the value of {memb}, as defined by the sysconfigtab variable generic:memberid.
Check that the member-specific file exists, for example:
# ls /cluster/members/member3/etc/rc.config
For a generic:memberid of 3, create a new link as follows:
# cd /etc
# ln -s /cluster/members/{memb}/etc/rc.config rc.config
6.3 Creating UFS File Systems Manually
The basic file system configuration for your operating system is defined during installation, when your system's root file system is established. After installation, you can create file systems as your needs evolve. The following sections describe how you create file systems manually, at the command line. Note that you must use command-line operations on file systems when working at the console in single-user mode, when graphic utilities are unavailable.
For information on creating AdvFS file systems, refer to the
AdvFS Administration
guide.
6.3.1 Using newfs to Create a New File System
The typical procedure for creating a file system is as follows:
Identify the disk device and the raw disk partition that you want to use for the new file system, ensuring that the partition is correctly labeled and formatted and is not already in use. Use the command-line interfaces hwmgr and dsfmgr to identify devices or to add new devices and create the device special files. This procedure is described in Chapter 5. Refer to the hwmgr(8) and dsfmgr(8) reference pages for information on the command options.
If required, use the disklabel -p command to read the current partition status of the disks. Examine the /etc/fstab file to ensure that the partitions are not already allocated to file systems or used as swap devices. (See the disklabel(8) and fstab(4) reference pages for more information.)
Having identified an unused raw (character) disk partition, determine the special device file name for the partition. For example, partition g on disk 2 has a special device file named /dev/rdisk/dsk2g. (See Chapter 5 for information on device special file names.)
Use the newfs command to create a file system on the target partition. (See the newfs(8) reference page for more information.)
Create a mount point directory, and use the mount command to mount the new file system, making it available for use. If you want the mount to persist across reboots, add an entry to the /etc/fstab file. If you want to export the file system, add it to the /etc/exports file. (See the mount(8) reference page for more information.)
Use the chmod command to check and adjust any access control restrictions. (See the chmod(1) reference page for more information.)
These steps are described in more detail in the remainder of this section.
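The steps above can be sketched as a single shell session. This is an illustrative outline only: the device dsk0f and the mount point /projects are example names, and every command requires root privileges on a running Tru64 UNIX system.

```shell
disklabel -r /dev/rdisk/dsk0f       # confirm the partition is labeled and unused
grep dsk0f /etc/fstab               # confirm it is not already allocated or used for swap
newfs /dev/rdisk/dsk0f              # build the UFS file system on the raw device
mkdir -p /projects                  # create the mount point directory
mount /dev/disk/dsk0f /projects     # mount the block device on the mount point
# To make the mount persist across reboots, add an entry to /etc/fstab:
#   /dev/disk/dsk0f  /projects  ufs  rw  1  2
chmod 755 /projects                 # adjust access control as required
```

Note that newfs takes the raw (character) device, while mount takes the block device for the same partition.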
The newfs command formats a disk partition and creates a UFS file system. Using the information in the disk label or the default values specified in the /etc/disktab file, the newfs command builds a file system on the specified disk partition. You can also use newfs command options to specify the disk geometry.
Note
Changing the default disk geometry values may make it impossible for the fsck program to find the alternate superblocks if the standard superblock is lost.
The newfs command has the following syntax:
/sbin/newfs [-N] [newfs_options] special_device [disk_type]
You must specify the unmounted raw device (for example, /dev/rdisk/dsk0a). Refer to newfs(8) for information on the command options specific to file systems. This reference page also provides information on the mfs command and describes how you create a memory file system (mfs).
The following example shows the creation of a new file system:
Determine the target disk and partition. For most systems, your local administrative log book will tell you what disk devices are attached to a system and what partitions are assigned. However, you may be faced with administering a system that could be in an unknown state; that is, devices may have been removed or added. Use the following commands and utilities to assist you in identifying a target disk and partition:
Examine the contents of the /dev/disk directory. Each known disk device has a set of device special files for the partition layout. For example, /dev/disk/dsk1a to /dev/disk/dsk1h tells you that there is a device named dsk1.
Devices may be available on the system, but without any device special files. Use the hwmgr command to examine all devices that are physically known to the system and visible on a bus. For example:
# hwmgr -view devices -category disk
HWID:   DSF Name             Model      Location
-------------------------------------------------------
  15:   /dev/disk/floppy0c   3.5in      fdi0-unit-0
  17:   /dev/disk/dsk0c      RZ1DF-CB   bus-0-targ-0-lun-0
  19:                        RZ1DF-CB   bus-0-targ-1-lun-0
  19:   /dev/disk/cdrom0c    RRD47      bus-0-targ-4-lun-0
If a device is found for which no device special files exist, you can create the device special files using the dsfmgr utility.
Note
Normally, device special files will be created automatically when a new disk device is added to the system. You will only need to create them manually under the circumstances described in Chapter 5.
Having identified a device, use the disklabel command to determine which partitions may be in use, as follows:
# disklabel -r /dev/rdisk/dsk0a
8 partitions:
#        size     offset    fstype  [fsize bsize cpg]  # NOTE: values not exact
a:     262144          0    4.2BSD   1024  8192   16   # (Cyl.    0 -   78*)
b:    1048576     262144    swap                       # (Cyl.   78*-  390*)
c:   17773524          0    unused      0     0        # (Cyl.    0 - 5289*)
d:    1048576    1310720    swap                       # (Cyl.  390*-  702*)
e:    9664482    2359296    AdvFS                      # (Cyl.  702*- 3578*)
f:    5749746   12023778    unused      0     0        # (Cyl. 3578*- 5289*)
g:    1433600     524288    unused      0     0        # (Cyl.  156*-  582*)
h:   15815636    1957888    unused      0     0        # (Cyl.  582*- 5289*)
From the disklabel command output, it appears that there are several unused partitions. However, the c partition cannot be used because it overlaps the other partitions. Unless a custom disk label has been created on the disk, only three possible tables of standard partitions are available for use, as shown in Table 6-1.
Table 6-1: Disk Partition Tables

Partition Table | Description
c               | The entire disk is labeled as a single partition. Therefore, other partitions overlap c and cannot be used.
a b g h         | The disk is divided into four partitions. Partition a can be used as a boot partition. Partitions c, d, e, and f overlap and cannot be used.
a b d e f       | The disk is divided into five partitions. Partition a can be used as a boot partition. Partitions c, g, and h overlap and cannot be used.
The disk listed in the output from the disklabel command in step 1.c already uses partitions a, b, d, and e. Therefore it is labeled for five partitions, and the f partition is unused and available for the new file system.
Note
If a custom disk label has been applied to the disk and partitions are extended, you may not be able to use a partition even if it is designated as unused. In this case, the newfs command cannot create the file system and returns an error message.
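When scripting such checks, the disklabel output can be scanned for candidate partitions. The following sketch is illustrative only; it reproduces an abbreviated form of the example output above, and remember that an "unused" fstype alone does not guarantee a partition is usable (the overlapping c partition is a case in point):

```shell
# Print the names of partitions whose fstype column reads "unused".
# The here-string reproduces (abbreviated) the disklabel example above.
disklabel_output='a: 262144 0 4.2BSD
b: 1048576 262144 swap
c: 17773524 0 unused
f: 5749746 12023778 unused
g: 1433600 524288 unused
h: 15815636 1957888 unused'

unused=$(printf '%s\n' "$disklabel_output" |
    awk '$4 == "unused" { sub(":", "", $1); printf "%s ", $1 }')
echo "partitions marked unused: $unused"
```

On a live system you would feed the script from `disklabel -r` and then cross-check the survivors against /etc/fstab and the overlap rules in Table 6-1.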
Use the newfs command to create a file system on the target partition, as follows:
# newfs /dev/rdisk/dsk0f
Warning: 2574 sector(s) in last cylinder unallocated
/dev/rdisk/dsk0f:  5749746 sectors in 1712 cylinders of 20 tracks, 168 sectors
        2807.5MB in 107 cyl groups (16 c/g, 26.25MB/g, 6336 i/g)
super-block backups (for fsck -b #) at:
 32, 53968, 107904, 161840, 215776, 269712, 323648, 377584,
 431520, 485456, 539392, 593328, 647264, 701200, 755136, 809072,
 863008, 916944, 970880, 1024816, 1078752, 1132688, 1186624, 1240560,
 . . .
The command output provides information on the size of the new file system and lists the super-block backups that are used by the file system checking utility, fsck. Refer to the fsck(8) reference page for more information.
Mount the file system as described in the following sections.
6.3.2 Making File Systems Accessible to Users
You attach a file system to the file system hierarchy with the mount command, which makes the file system available for use. The mount command attaches the file system to an existing directory, which becomes the mount point for the file system.
Note
The operating system does not support 4 KB block-size file systems. The default block size for file systems is 8 KB. To access the data on a disk that has 4 KB block-size file systems, you must back up the disk to either a tape or a disk that has 8 KB block-size file systems.
When you boot the system, file systems that are defined in the /etc/fstab file are mounted. The /etc/fstab file contains entries that specify the device and partition where the file system is located, the mount point, and additional information about the file system, such as the file system type. If you are in single-user mode, the root file system is mounted read-only.
To change a file system's mount status, use the mount command with the -u option. This is useful if you try to reboot and the /etc/fstab file is unavailable or corrupted. In that case, use a command similar to the following:
# mount -u /dev/disk/dsk0a /
The /dev/disk/dsk0a device is the root file system.
6.3.3 Using the /etc/fstab File
Either AdvFS or UFS can be the root file system, although AdvFS is used by default if you do not specify UFS during installation. If your system was supplied with a factory-installed operating system, the root file system will be AdvFS. The operating system supports only one root file system, from which it accesses the executable kernel (/vmunix) and other binaries and files that it needs to boot and initialize. The root file system is mounted at boot time and cannot be unmounted. Other file systems must be mounted, and the /etc/fstab file tells a booting system which file systems to mount and where to mount them.
The /etc/fstab file contains descriptive information about file systems and is read by commands such as the mount command. When you boot the system, the /etc/fstab file is read and the file systems described in the file are mounted in the order in which they appear. Each file system is described on a single line; the information on each line is separated by tabs or spaces. The order of entries in the /etc/fstab file is important because the mount and umount commands read and act on the entries in the order in which they appear. You must be root user to edit the /etc/fstab file.
When you complete changes to the file and want to apply them immediately, use the mount -a command. Otherwise, any changes you make to the file become effective only when you reboot the system.
The following is an example of an /etc/fstab file:
/dev/disk/dsk2a     /                   ufs    rw     1  1
/dev/disk/dsk0g     /usr                ufs    rw     1  2
/dev/disk/dsk2g     /var                ufs    rw     1  2
/usr/man@tuscon     /usr/man            nfs    rw,bg  0  0
proj_dmn#testing    /projects/testing   advfs  rw     0  0
     [1]                 [2]            [3]    [4]   [5] [6]
Each line contains one entry, with the information separated by tabs or spaces. An /etc/fstab file entry contains the following information:
Specifies the block special device or remote file system to be mounted. For UFS, the special file name is the block special file name, not the character special file name. For AdvFS, the special file name is a combination of the name of the file domain, a number sign (#), and the fileset name. [Return to example]
Specifies the mount point for the file system or remote directory (for example, /usr/man or /projects/testing). [Return to example]
Specifies the type of file system, as follows:

cdfs   | Specifies an ISO 9660 or High Sierra formatted (CD-ROM) file system.
nfs    | Specifies NFS.
procfs | Specifies a /proc file system, which is used for debugging.
ufs    | Specifies a UFS file system or a swap partition.
advfs  | Specifies an AdvFS file system.
Describes the mount options associated with the partition. You can specify a list of options separated by commas. Usually, you specify the mount type and any additional options appropriate to the file system type, as follows:

ro                    | Specifies that the file system is mounted with read-only access.
rw                    | Specifies that the file system is mounted with read-write access.
userquota, groupquota | Specifies that user or group quotas are enabled on the file system and that it is automatically processed by the quotacheck command. By default, user and group quotas for a file system are contained in the quota.user and quota.group files at the root of the file system.
xx                    | Specifies that the file system entry should be ignored.
Used by the dump command to determine which UFS file systems should be backed up. If you specify the value 1, the file system is backed up. If you do not specify a value or if you specify 0 (zero), the file system is not backed up. [Return to example]
This is the pass number, which controls parallelism in the fsck (UFS) and quotacheck (UFS and AdvFS) utilities when processing all the entries in the /etc/fstab file. You can use this field to avoid saturating the system with too much I/O to the same I/O subsystem by controlling the sequence of file system checking during startup.
If you do not specify a pass number or if you specify 0 (zero), the file system is not checked. All entries with a pass number of 1 are processed one at a time (no parallelism). For the root file system, always specify 1. Entries with a pass number of 2 or greater will be processed in parallel based on the pass number assigned (with some exceptions). All entries with a pass number of 2 will be processed before pass number 3, pass number 3 will be processed before 4, and so on. The exceptions are multiple UFS file systems on separate partitions of the same disk or multiple AdvFS filesets in the same domain. These are processed one after the other if they all have the same pass number. All other file systems with the same pass number are processed in parallel. [Return to example]
See fstab(4) for more information about its fields and options.
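To make the six fields concrete, the following sketch pulls one fstab-style entry apart; the sample line mirrors the /usr entry in the example above:

```shell
# Split an /etc/fstab entry into its six whitespace-separated fields.
entry='/dev/disk/dsk0g /usr ufs rw 1 2'
set -- $entry                 # word-split the entry into $1..$6
echo "device:      $1"        # [1] block special device or remote file system
echo "mount point: $2"        # [2] directory on which it is mounted
echo "type:        $3"        # [3] ufs, advfs, nfs, cdfs, or procfs
echo "options:     $4"        # [4] rw, ro, userquota, ...
echo "dump:        $5"        # [5] 1 = back up with the dump command
echo "pass number: $6"        # [6] fsck/quotacheck ordering; 0 = do not check
```

Note that the entry is deliberately left unquoted in the set command so the shell splits it on whitespace, just as mount and fsck parse the file.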
Swap partitions are configured in the /etc/sysconfigtab file, as shown in the following example:
swapdevice=/dev/disk/dsk0b,/dev/disk/dsk0d
vm-swap-eager=1
Refer to Chapter 5 and Chapter 12 and the swapon(8) reference page for more information on swapping and swap partitions.
6.3.4 Using the mount Command
You use the mount command to make a file system available for use. Unless you add the file system to the /etc/fstab file, the mount is temporary and does not persist after you reboot the system. The mount command supports the UFS, AdvFS, NFS, CDFS, and /proc file system types.
The following mount command syntax applies to all file systems:
mount [-adflruv] [-o option] [-t type] [file_system] [mount_point]
For AdvFS, the file system argument has the following form:
domain#fileset
Specify the file system and the mount point, which is the directory on which you want to mount the file system. The directory must already exist on your system. If you are mounting a remote file system, use one of the following syntaxes to specify the file system:
host:remote_directory
remote_directory@host
The following command lists the currently mounted file systems and the file system options:
# mount -l
/dev/disk/dsk2a on / type ufs (rw,exec,suid,dev,nosync,noquota)
/dev/disk/dsk0g on /usr type ufs (rw,exec,suid,dev,nosync,noquota)
/dev/disk/dsk2g on /var type ufs (rw,exec,suid,dev,nosync,noquota)
/dev/disk/dsk3c on /usr/users type ufs (rw,exec,suid,dev,nosync,noquota)
/usr/share/man@tuscon on /usr/share/man type nfs (rw,exec,suid,dev,
nosync,noquota,hard,intr,ac,cto,noconn,wsize=8192,rsize=8192,
timeo=10,retrans=10,acregmin=3,acregmax=60,acdirmin=30,acdirmax=60)
proj_dmn#testing on /alpha_src type advfs (rw,exec,suid,dev,nosync,noquota)
The following command mounts the /usr/homer file system located on host acton on the local /homer mount point with read-write access:
# mount -t nfs -o rw acton:/usr/homer /homer
Refer to mount(8) for more information on general options and options specific to a file system type.
6.3.5 Using the umount Command
Use the umount command to unmount a file system. You must unmount a file system if you want to check it with the fsck command or change its partitions with the disklabel command. Be aware, however, that changing partitions could destroy the file systems on the disk.
The umount command has the following syntax:
umount [-afv] [-h host] [-t type] [mount_point]
If any user process (including a cd command) is in effect within the file system, you cannot unmount it. If the file system is in use when the command is invoked, the system returns the following error message and does not unmount the file system:
mount device busy
You cannot unmount the root file system with the umount command.
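When umount reports that the device is busy, one way to find the blocking processes is the fuser command. This is a hedged sketch: /projects is an example mount point, and fuser option letters vary between systems, so check your fuser(8) reference page before relying on a particular flag.

```shell
# Try to unmount; if the file system is busy, list the processes
# that are holding files or directories open within it.
umount /projects || {
    echo "umount failed; processes using /projects:"
    fuser /projects      # reports the PIDs of processes using the file system
}
```

Once the reported processes have exited (or been asked, via wall, to leave the file system), the umount can be retried.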
6.4 Administering UFS File Systems Using SysMan
In addition to the manual method of file system creation and administration, the operating system provides some graphical tools, and also some SysMan tasks, which can be used in different user environments. Refer to Chapter 1 for information on invoking and using SysMan. If you are using the Common Desktop Environment, other graphical utilities are available. Access these from the CDE Application Manager main folder as follows:
Click on the Application Manager icon on the CDE Front Panel.
Select the System_Admin icon from the Application Manager folder window.
Select the Storage_Management icon from the Application Manager - System_Admin folder window.
Depending on what options are installed and licensed on your system, the following icons may be available in this window:
Advanced File System - Select this icon to run the AdvFS graphical interface. Refer to the AdvFS Administration guide for more information. See also the dtadvfs(8) reference page for information on launching the AdvFS graphical interface from the command line.
Bootable Tape - Select this icon to invoke the SysMan Bootable Tape Creation interface. Use this interface to create a bootable system image on tape. This image contains a standalone kernel and copies of selected file systems that you specify during creation. You can recover the image using the btextract utility. Refer to Chapter 9 for information on using the bootable tape interfaces. See also the btcreate(8), btextract(8), and bttape(8) reference pages. The bttape command is used to launch the bootable tape graphical interface from a command line or script.
File System Management - Select this icon to invoke the SysMan Storage utilities described in this section.
Logical Storage Manager (LSM) - Select this icon to invoke the LSM graphical interface. Logical Storage Manager enables you to create virtual disk volumes that appear as a single device to the system and any applications. Refer to the Logical Storage Manager guide for more information and the lsm(8) reference page for a list of LSM commands. To invoke this interface from the command line, use the dxlsm command. Refer to the dxlsm(8) reference page for more information.
Prestoserve I/O Accelerator - Select this icon to invoke the Prestoserve graphical utilities. Prestoserve stores synchronous disk writes in nonvolatile memory instead of writing them to disk. The stored data is then written to disk asynchronously as needed or when the machine is halted. Refer to the Guide to Prestoserve for more information and the presto(8) reference page for information on the command-line interface. To invoke this interface from the command line, use the dxpresto command. Refer to the dxpresto(8) reference page for more information.
The following sections describe the UFS file system utilities in the SysMan Menu.
6.4.1 File System Tasks in the SysMan Menu
The SysMan Menu contains a main menu option titled Storage. When expanded, these options appear as follows:
- Storage
    - File Systems Management Utilities
        - General File System Utilities
            | Dismount a File System
            | Display Currently Mounted File Systems
            | Mount File Systems
            | Share Local Directory (/etc/exports)
            | Mount Network Directory (/etc/fstab)
        - Advanced File System (AdvFS) Utilities
            | Manage an AdvFS Domain
            | Manage an AdvFS File
            | Defragment an AdvFS Domain
            | Create a New AdvFS Domain
            | Create a New AdvFS Fileset
            | Recover Files from an AdvFS Domain
            | Repair an AdvFS Domain
        - Logical Storage Manager (LSM) Utilities
            | Initialize the Logical Storage Manager (LSM)
        - UNIX File System (UFS) Utilities
            | Create a New UFS File System
Each option provides a step-by-step interface to perform basic file system administrative tasks. Refer to Chapter 1 for information on invoking and using the SysMan Menu. You can also launch the file system utilities from the SysMan Station. For example, if you are using the SysMan Station to display the Mounted_Filesystems view, you can press MB3 to do the following:
Launch any available Storage options, such as Dismount to unmount a mounted file system.
Display properties of file systems such as the mount point or space used.
The SysMan Station Physical_Filesystems view provides a graphical view of file systems mapped to physical devices and enables you to perform tasks such as make AdvFS filesets on an existing domain. Refer to Chapter 1 for information on invoking and using the SysMan Station. Refer to the online help for information on using its file system options.
The following SysMan Menu Storage options are documented in other books:
Advanced File System (AdvFS) Utilities - Refer to the AdvFS Administration guide.
Logical Storage Manager (LSM) Utilities - Refer to the Logical Storage Manager guide.
The following sections describe the General File System Utilities and the UNIX File System (UFS) Utilities file system tasks available from the SysMan Menu. The typical procedure for creating a file system is exactly as described in Section 6.3, although the SysMan Menu tasks are not organized in the same sequence. These tasks are general-purpose utilities that you can use at any time to create and administer file systems.
6.4.2 Using SysMan to Dismount a File System
To dismount a file system, you need to specify its mount point, device special file name, or AdvFS domain name. You can obtain this information by using the more command to display the contents of the /etc/fstab file, or by using the SysMan Menu Storage option Display Currently Mounted File Systems, described in Section 6.4.3. Refer to the mount(8) and umount(8) reference pages for the command-line options.
The Dismount a File System option is available under the SysMan Menu Storage options. Expand the menu and select General File System Utilities if it is not displayed. When you select this option, a window titled Dismount a File System is displayed, prompting you to complete either of the following fields. You do not need to complete both fields:
Mount point: - Enter the mount point on which the file system is currently mounted, such as /mnt.
File system name: - Enter the device special file name for the mounted partition, such as /dev/disk/dsk0f, or an AdvFS domain name such as accounting_domain#act.
Press the Apply button to dismount the file system and continue dismounting other file systems, or press OK to dismount the file system and exit.
6.4.3 Using SysMan to Display Mounted File Systems
The option to display mounted file systems is available under the SysMan Menu Storage options. Expand the menu and select General File System Utilities - Display Currently Mounted File Systems. When you select this option, a window titled Currently Mounted File Systems is displayed, containing a list of the file systems similar to the following:
/dev/disk/dsk0a                      /
/proc                                /proc
usr_domain#usr                       /usr
usr_domain#var                       /var
19serv:/share/19serv/tools/tools     /tmp_mnt/19serv/tools
. . .
The following information is provided in the window:
File System - This can be one of the following:
The special device file name from the /dev/* directories that maps to the mounted device partition. The pathname /dev/disk/dsk0a indicates partition a of disk 0. Refer to Chapter 5 for information on device names and device special files.
An NFS (Network File System) mounted file share, possibly mounted using the automount utility, which automatically mounts exported networked file systems when a local user accesses (imports) them. Refer to the Network Administration guide for information on NFS. An NFS mount typically lists the exporting host system name, followed by the exported directory, as follows:
19serv:/share/19serv/tools/tools /tmp_mnt/19serv/tools
Here, 19serv: is the host name identifier followed by a colon, /share/19serv/tools/tools is the pathname to the exported directory, and /tmp_mnt/19serv/tools is the temporary mount point that is automatically created by NFS.
An AdvFS domain name such as usr_domain#var. Refer to AdvFS Administration or the advfs(4) reference page for information on domains.
A descriptive name, such as file-on-file mount, which would point to a service mount point such as /usr/net/servers/lanman/.ctrlpipe.
Mount Point - The directory on which the file system is mounted, such as /usr or /accounting_files. The list can be extensive, depending on the number of currently mounted file systems. Note that the list can provide information on current file-on-file mounts that may not be visible in the /etc/fstab file. File systems listed in the /etc/fstab file that are not currently mounted are not included in this list.
The following option buttons are available from the Currently Mounted File Systems window:
Details... - Use this option to display detailed file system data, otherwise known as the properties of the file system. You can obtain the following data from this option:
File system name:   /dev/disk/dsk0a
Mount point:        /
File system size:   132 MBytes
Space used:         82 MBytes
Space available:    35 MBytes
Space used %:       70%
Dismount... - Use this option to dismount a selected file system. You will be prompted to confirm the dismount request. Note that you may be unable to dismount the file system if it is currently in use, even if a user has only run the cd command to change directory into the file system that you want to dismount. Use the wall command if you want to ask users to stop using the file system.
Reload - Use this option to refresh the Currently Mounted File Systems list and update any file systems that were dismounted. Note that if you mount file systems using the command line, or if NFS mounts are established, these newly mounted systems will not be displayed until you exit the utility and invoke it again.
OK - Press this button to exit the Currently Mounted File Systems window and return to the SysMan Menu.
6.4.4 Using SysMan to Mount File Systems
The operation of mounting a file system has the following prerequisites:
The file system must be listed in the /etc/fstab file.
The mount point must exist. If it does not, use the mkdir command to create a mount point. Refer to the mkdir(1) reference page for information on this command.
The file system must be created on a disk partition, and the disk must be on line. Refer to Section 6.4.7 for information on creating UNIX File Systems (UFS) using the SysMan Menu. See Section 6.3.1 for information on manually creating file systems using the newfs command. Refer to the newfs(8) reference page for information on this command. Information on creating AdvFS file systems is located in the AdvFS Administration guide.
The diskconfig graphical utility provides a way to customize disk partitions and write a file system on the partition in a single operation. Refer to Chapter 5 for information on the diskconfig command, and see the diskconfig(8) reference page for information on launching the utility. You can also launch this utility from the SysMan Menu or SysMan Station and from the CDE Application Manager.
Normally, the availability of disk devices is managed automatically by the system. However, if you have just added a device dynamically, while the system is still running, it may not yet be visible to the system and you may have to tell the system to find the device and bring it on line.
Use the hwmgr command to do this, and to check the status of disk devices and partitions for existing disks (if necessary). Refer to the hwmgr(8) reference page for information on this command. Refer to Chapter 5 for information on administering devices.
Normally, the device special files for a disk partition, such as /dev/disk/dsk5g, are automatically created and maintained by the system. However, if you do not find the device special file, you may need to create it. Refer to Chapter 5 for information on the dsfmgr command, and see the dsfmgr(8) reference page for information on command options such as dsfmgr -s, which lists the device special files for each device (Dev Node).
The option to mount a file system is available under the SysMan Menu Storage options. Expand the menu and select General File System Utilities - Mount File Systems to display the Mount Operation window. This interface provides an alternative to the mount command, described in the mount(8) reference page. This utility operates only on the file systems currently listed in the /etc/fstab file. You can obtain information on the mounted file systems using the Display Currently Mounted File Systems SysMan Menu option, described in Section 6.4.3.
The Mount Operation window provides the following four exclusive options, which you select by clicking on the button:
Mount a specific file system
Select this option to mount a single specific file system. The File System name and Mount Point window will be displayed, prompting you to complete either of the following fields:
Mount point: - Type the mount point directory from the /etc/fstab file, such as /cdrom.
File system name: - Type a device special file name, such as /dev/disk/cdrom0c. Alternatively, type an AdvFS domain name, such as usr_domain#usr.
The File System Mounting Options window will be displayed next. This window is common to several of the mounting operations, and is described at the end of this list.
Mount all file systems listed in /etc/fstab
Use this option to mount all file systems currently listed in the /etc/fstab file. Using this option assumes that all the specified partitions or domains are on line, and all the mount points have been created. The File System Mounting Options window will be displayed next. This window is common to several of the mounting operations, and is described at the end of this list.
As above, but only those of a specified type
Use this option to mount all file systems of a specified type listed in the /etc/fstab file. Using this option assumes that all the specified partitions or domains are on line, and all the mount points have been created. You specify the file system type in the File System Mounting Options window, which will be displayed next. This window is common to several of the mounting operations, and is described at the end of this list. For example, you can choose to include only AdvFS file systems.
Mount all file systems NOT of the selected type
Use this option to exclude from the mount operation all file systems of a specified type listed in the /etc/fstab file. Using this option assumes that all the specified partitions or domains are on line, and all the mount points have been created. You specify the file system type to be excluded in the File System Mounting Options window, which will be displayed next. This window is common to several of the mounting operations, and is described at the end of this list. For example, you can choose to exclude only UFS file systems.
The File System Mounting Options window is common to several of the preceding list of mount options, and enables you to specify additional optional characteristics for the mount operation. Some options may not be available, depending on the type of mount operation that you are attempting. The following options are available from this window:
Access Mode - Click on the appropriate button for the type of access that you want to enable:
Read/Write - Select this option to permit authorized users to read from and write to files in the file system.
Read only - Select this option to permit authorized users only to read from files in the file system, or to mount read-only media such as a CD-ROM volume.
File system type - From the menu, select one of the following options:
Unspecified - Select this option to allow any file system specification.
AdvFS - Select this option to specify an Advanced File System type. Refer to AdvFS Administration or the advfs(4) reference page for more information.
UFS - Select this option to specify a UNIX File System type. Refer to Section 6.1.4 for a description of this file system.
NFS - Select this option to specify a Network File System. Refer to the Network Administration guide and the nfs(4) reference page for more information.
CDFS - Select this option to specify a Compact Disk Read-Only Memory File System. Refer to the cdfs(4) reference page for more information.
Other - Select this option to enter your own file system choice in the Other file system type: field described in the next item.
Other file system type - Type the designation for the file system, such as mfs for the memory file system (RAM disk). Refer to the mount(8) reference page for more information on supported file systems, and see the individual file system reference pages, such as mfs(4) for the memory file system.
Advanced Mount options - Type any advanced mount options that you want for the file system; for example, the dirty option, which allows a file system to be mounted even if it was not dismounted cleanly, such as after a system crash. Refer to the mount(8) reference page for more information on the various options.
When you have entered the options you want, use the Finish button to process the mount operation and return to the SysMan Menu options. Use the Back button to return to the Mount Operation window and process new mount operations, or the Cancel button to abort the mount operation.
If data in any field is incomplete or incorrect, you will be prompted to correct it before the mount operation can proceed.
6.4.5 Using SysMan to Share a Local Directory
File sharing involves adding file systems to the /etc/exports file so that users of other host systems can mount the shared directories via NFS (Network File System). Note that if the Advanced Server for UNIX (ASU) is installed and running, you may have further options to share file systems with PC clients. Refer to the ASU Concepts and Planning Guide.
You may also have to enable network access to your system for remote hosts to mount the shared directories, such as by adding the hosts to the /etc/hosts file, setting up NFS, and running dxhosts. Refer to the Network Administration guide for information on configuring your system to allow incoming connections to shared file systems.
You can also manage shared file systems using the dxfileshare graphical interface, which can be launched from the command line or from the CDE Application Manager - DailyAdmin folder. See the File Sharing option in that folder. Online help is available for this interface. Refer to the dxfileshare(8) reference page for more information on invoking the interface.
The only prerequisite for sharing file systems is that you have already created disk file systems that are suitable for sharing, as described in Section 6.3.1 (manual method) or Section 6.4.7 (using SysMan Menu options). You specify the shared file system by its directory pathname, such as /usr/users/share.
The file system sharing option is available under the SysMan Menu Storage branch as follows:
-Storage - File Systems Management Utilities - General File System Utilities | Share Local Directory (/etc/exports).
Follow these steps to share an existing file system:
In the window titled Share Local Directory on hostname.xxx.yyy.xxx, any existing shares are listed in the first box, identified by the directory pathname. Press the Add... button to add a directory to the list.
A window titled Share Local Directory: Add Local Directory is displayed next. Complete the fields as follows:
In the field labeled Share This Directory: type the directory pathname, such as /usr/users/share/tools.
Choose whether to share the directory with read/write access or read-only access. The Read/Write check button is selected by default.
Choose whether to share the directory with all qualified hosts (remote systems) or only with named hosts as follows. For all hosts, check the All button. For selected hosts, check the Selected button and then add hosts to the Selected Hosts With Access list as follows:
Enter the host name and address, such as dplhst.xxx.yyy.com. Note that the host must be known to your local host, either through the /etc/hosts file or via a domain name server (DNS). Refer to the Network Administration guide for more information.
Select OK to validate the data and close the dialog box and return to the window titled Share Local Directory on host name. Note that all changes are deferred until you press OK in this window. When you press OK, the directories are made available for sharing.
The same utility is also used for the following share-management tasks:
Deleting hosts from the access list
Modifying access to shared file systems by changing the read/write permissions or removing selected hosts from the access list
Deleting shared file systems from the shared list to prevent any access
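Behind this SysMan task, the shares are recorded as lines in the /etc/exports file. As a rough illustration only (the pathnames and host name here are hypothetical, and the exact option syntax is defined in exports(4)), the entries written by the utility might resemble the following:

```
/usr/users/share/tools    dplhst.xxx.yyy.com
/usr/users/share   -ro
```

Consult the exports(4) reference page before editing this file by hand; the utility validates entries for you, a manual edit does not.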
6.4.6 Using SysMan to Mount a Network File System
You can mount file systems that are shared (exported) by other hosts using the Network File System (NFS). Your local system (host) must be configured to import NFS-shared file systems, including authorized network access to remote hosts. Remote systems (hosts) must be configured to share or export file systems by specifying your system in their /etc/exports files.
You can mount NFS-shared file systems in several ways:
Temporarily, where the mount will not persist across a reboot. A mount point will be created and the file system will be connected for the current session. If the system is shut down for any reason, the mount point will persist but the file-system connection will be lost and will not be reestablished when the system is booted.
Permanently, by specifying the shared NFS file systems in your local /etc/fstab file. For example, your /etc/fstab file may already have one or more NFS file system entries similar to the following:
/usr/lib/toolbox@ntsv /usr/lib/toolbox nfs rw,bg,soft,nosuid 0 0
(See Section 6.3.3 for a description of the structure of an /etc/fstab file.)
Automatically on request from a user, using the NFS automount utility. Refer to the Network Administration guide and the automount(8) reference page for information on using this option. Using automount will enable your local users to transparently mount any file systems that are shared with (exported to) your local system. You will not need to constantly respond to mount requests from users.
The information in this section enables you to add more NFS shares permanently to your /etc/fstab file or to create temporary imports of shared file systems.
Refer to the Network Administration guide for information on configuring the network and NFS. See Section 6.4.5 for a description of the process of sharing (exporting) file systems using the SysMan Menu options.
You can also manage shared file systems using the dxfileshare graphical interface, which can be launched from the command line or from the CDE Application Manager - DailyAdmin folder. See the File Sharing option in that folder. Online help is available for this interface. Refer to the dxfileshare(8) reference page for more information on invoking the interface.
The option to mount NFS file systems is available under the SysMan Menu Storage options. Expand the menu and select General File System Utilities - Mount Network Directory (/etc/fstab).
Follow these steps to mount a shared file system:
In the window titled Mount Network Directory on hostname, you will see a list of the existing available NFS shared file systems listed in the /etc/fstab file, which provides you with the following information:
Directory and Host - The name of the host, and the directory it is exporting to your local system.
Mounted On - The local mount point on which the shared file system is mounted. This is a directory pathname, such as /tools/bin/imaging.
Options - The access options for the directory, which can be:
Read/Write - Allows users to both read data from and write data to the shared file system. Note that this may be dependent on access conditions set by the exporting host.
Read-Only - Allows users only to read data from the shared file system.
Reboot - Indicates whether the mount will be reestablished if the system is shut down for any reason, and can be:
true - Permanent; the entry is in the local /etc/fstab file and the mount will persist across reboots.
false - Temporary; the entry is not in the local /etc/fstab file and the mount will not persist.
To add a file system to the list of NFS-shared directories, press the Add... button. A window titled Mount Network Directory: Add Network Directory will be displayed.
When you use this option, file systems will be mounted with the options hard (retries until a response is received) and bg (background mount) by default. Refer to the mount(8) reference page for more information on these options.
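For comparison, a temporary NFS mount with these same defaults can be performed manually from the command line. This is only a sketch; the host name ntsv and the pathnames are examples:

```shell
# Create the local mount point if it does not already exist.
mkdir -p /tools/bin/imaging

# hard: retry until the server responds; bg: retry in the background.
mount -t nfs -o hard,bg ntsv:/tools/toolbox /tools/bin/imaging
```

Because no /etc/fstab entry is written, a mount performed this way does not persist across a reboot.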
Follow these steps to add an NFS-shared file system:
Remote Host Name - Enter the name of the host sharing the file system. This can be the fully qualified name, such as ntsv.aaa.bbb.com, or an alias listed in your /etc/hosts file.
Remote Directory Path - Enter the directory pathname of the share, such as /tools/toolbox/admin. You may need to verify this information from the /etc/exports file entries on the remote host.
Local Mount Point - Enter the pathname to the mount point that you want to use on the local host. This need not be the same as the remote pathname, but might be something that indicates what is mounted, for example: /tools/remote_admin_tools.
If the mount point does not exist, you will be given the option to create it.
Access Permission - Specify the user access to the file system as follows:
Read/Write - Allows users to both read data from and write data to the shared file system. Note that this may be dependent on access conditions set by the exporting host.
Read-Only - Allows users only to read data from the shared file system.
Mount on Reboot (put in /etc/fstab) - This checkbox determines whether the mount is permanent or temporary as follows:
Checked - Permanent; the entry is in the local /etc/fstab file and the file system will be remounted when the system is rebooted.
Unchecked - Temporary; the entry is not in the local /etc/fstab file and the file system will not be remounted when the system is rebooted.
Press the OK button to validate the share and return to the previous window. Press the Apply button to validate the share and continue adding more NFS-shared file systems. (Press Cancel to abort the operation and return to the previous window).
Permanent changes are deferred until you return to the Mount Network Directory on hostname and press OK. When you choose the OK option, the file systems will be mounted.
The Mount Network Directory (/etc/fstab) option is also used for the following tasks:
Modify... - A window titled Mount Network Directory: Add Network Directory will be displayed, enabling you to change details of an existing share mount entry, such as changing the user access from the Read-only option to the Read/Write option.
Delete - Select one of the listed share mounts and press this button to remove it from the list. Select OK to unmount the file system and remove it from the /etc/fstab file. Note that it may not always be possible for an unmount operation to complete; for example, a user may be accessing the directory at the time the unmount command is issued. You should verify that the file system was unmounted and, if necessary, use the option described in Section 6.4.2.
6.4.7 Using SysMan to Create a UFS File System
Creating a UFS file system manually using the newfs command is described in Section 6.3.1, and the same prerequisites and sources of data apply to the process of creating a file system with the SysMan Menu options, except that you are limited to standard disk partitions. If you want to use custom partitions, use the diskconfig utility as described in Chapter 5.
Obtain the following items of data before proceeding:
Information about where the file system is to be stored, specified by either of the following:
The device special file name of the disk partition on which the file system is to be created, such as /dev/disk/dsk13h for the h partition on disk 13.
If the Logical Storage Manager application is in use, an LSM volume name. Refer to the Logical Storage Manager guide for more information.
The disk model, such as RZ1DF-CB. You can obtain such information using the hwmgr command as follows:
# hwmgr -view devices
Alternatively, use the SysMan Station Hardware View, select the disk, press MB3, and choose Properties... from the pop-up menu to view details of the device. The /etc/disktab file is a source of information on disk models. Refer to the disktab(4) reference page for information on the /etc/disktab file structure.
Determine whether you need any particular options for the file system, such as block size or optimization. Refer to the newfs(8) reference page for a complete list of options. You can also display the options from within the SysMan Menu utility.
The option to create a new UFS file system is available under the SysMan Menu Storage options. Expand the menu and select UNIX File System (UFS) Utilities - Create a New UFS File System.
A window titled Create a new UFS File System is displayed next. Follow these steps to create a file system:
Partition or LSM Volume - Type the name of the disk partition or LSM volume that you selected to store the file system.
Disk type - Type the name of the disk model, such as RWZ21.
Advanced newfs options - Enter any option flags, such as -b 64 for a 64 kilobyte block size.
Note that if you are unsure what options to use, clear all fields and press the Apply button. This will display a newfs information window, containing a list of flag options.
Press the OK button to create the file system and exit to the SysMan Menu, or press the Apply button to create the file system and continue creating more file systems. To abort the operation, press Cancel.
Use the SysMan Menu option Mount File Systems described in Section 6.4.4 to mount the newly created file systems.
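The SysMan steps above correspond closely to the manual procedure of Section 6.3.1. As a sketch (the partition and mount point names are examples; substitute a spare partition on your own system):

```shell
# Create the UFS file system on the character (raw) device...
newfs /dev/rdisk/dsk13h

# ...then create a mount point and mount the block device.
mkdir -p /usr/projects
mount /dev/disk/dsk13h /usr/projects
```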
6.5 Managing Quotas
This section describes user and group quotas for UFS. AdvFS also supports fileset quotas, which limit the amount of space a fileset can have. For information about AdvFS fileset quotas, see AdvFS Administration, which also has AdvFS-specific information about user and group quotas.
As a system administrator, you establish usage limits for user accounts and for groups by setting quotas for the file systems they use. Thus, user and group quotas are also known as file system quotas. The file system quotas are also known as disk quotas because, when established, they limit the number of disk blocks that can be used by a user account or a group of users.
You set quotas for user accounts and groups by file system. For example, a user account can be a member of several groups on a file system and also a member of other groups on other file systems. The file system quota for a user account applies to that account's files on that file system. A user account's quota is exceeded when the number of blocks (or inodes) the account uses on that file system exceeds the limit.
Like user account quotas, a group's quota is exceeded when the number of blocks (or inodes) the group uses on a particular file system exceeds the limit. However, blocks or inodes count toward a group's quota only when the files that are produced are assigned the group ID (GID) of the group. Files written by members of the group that are not assigned the GID of the group do not count toward the group quota.
Note
Quota commands display sizes in 1024-byte blocks instead of the more common 512-byte blocks.
You can apply quotas to file systems to establish a limit on the number of blocks and inodes (or files) that a user account or a group of users can allocate. You can set a separate quota for each user or group of users on each file system. You may want to set quotas on file systems that contain home directories, such as /usr/users, because the sizes of these file systems can increase more significantly than other file systems. You should avoid setting quotas on the /tmp file system.
6.5.1 Hard and Soft Quota Limits
File system quotas can have both soft and hard quota limits. When a hard limit is reached, no more disk space allocations or file creations that would exceed the limit are allowed. A hard limit is one more unit (such as one more block, file, or inode) than will be allowed when the quota limit is active.
The quota allows usage up to, but not including, the limit. For example, if a hard limit of 10,000 disk blocks is set for each user account in a file system, an account reaches the hard limit when 9,999 disk blocks have been allocated. For a maximum of 10,000 complete blocks for the user account, the hard limit should be set to 10,001.
The soft limit may be exceeded for a period of time (called the grace period). If the soft limit remains exceeded for longer than the grace period, no more disk space allocations or file creations are allowed until enough disk space is freed or enough files are deleted to bring the disk space usage or number of files below the soft limit.
As an administrator, you should set the grace period large enough for users to finish current work and then delete files to get their quotas down below the limits you have set.
Caution
With both hard and soft limits, it is possible for a file to be partially written if the quota limit is reached when the write occurs. This can result in the loss of data unless the file is saved elsewhere or the process is stopped.
For example, if you are editing a file and exceed a quota limit, do not abort the editor or write the file because data may be lost. Instead, escape from the editor you are using, remove the files, and return to the session. You can also write the file to another file system, such as /tmp, remove files from the file system whose quota you reached, and then move the file back to that file system.
6.5.2 Activating File System Quotas
To activate file system quotas on UFS, perform the following steps.
Configure the system to include the file system quota subsystem by editing the /sys/conf/NAME system configuration file to include the following line:
options QUOTA
Edit the /etc/fstab file and change the fourth field of the file system's entry to read rw, userquota, and groupquota. Refer to the fstab(4) reference page for more information.
Use the quotacheck command to create a quota file where the quota subsystem stores current allocations and quota limits. Refer to the quotacheck(8) reference page for command information.
Use the edquota command to activate the quota editor and create a quota entry for each user. For each user or group you specify, edquota creates a temporary ASCII file that you edit with any text editor. Edit the file to include entries for each file system with quotas enforced, the soft and hard limits for blocks and inodes (or files), and the grace period. If you specify more than one user name or group name on the edquota command line, the edits will affect each user or group. You can also use prototypes that allow you to quickly set up quotas for groups of users, as described in Section 6.5.3.
Use the quotaon command to activate the quota system. Refer to the quotaon(8) reference page for more information.
To check and enable file system quotas during system startup, use the following command to set the file system quota configuration variable in the /etc/rc.config file:
# /usr/sbin/rcmgr set QUOTA_CONFIG yes
Note
Setting QUOTA_CONFIG to yes causes the quotacheck command to be run against the UFS file systems during startup. The AdvFS design does not need this service. While it is not recommended, you can force quotacheck to be run against both UFS and AdvFS file systems during system startup using the following command:
# /usr/sbin/rcmgr set QUOTACHECK_CONFIG -a
To restore the default UFS-only quotacheck behavior, use the following command:
# /usr/sbin/rcmgr set QUOTACHECK_CONFIG ""
If you want to turn off quotas, use the quotaoff command. Also, the umount command turns off quotas before it unmounts a file system. Refer to quotaoff(8) for more information.
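Condensed into commands, the activation procedure above might look like the following for a file system mounted on /usr/users (the partition name, user name, and mount point are examples):

```shell
# 1. /etc/fstab entry for the file system (fourth field enables quotas):
#      /dev/disk/dsk3h  /usr/users  ufs  rw,userquota,groupquota  0  2

# 2. Create the quota files and reconcile current usage:
quotacheck /usr/users

# 3. Create quota entries for a user, then activate the quota system:
edquota eddie
quotaon /usr/users

# 4. Check and enable quotas automatically at startup:
/usr/sbin/rcmgr set QUOTA_CONFIG yes
```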
6.5.3 Setting File System Quotas for User Accounts
To set a file system quota for a user, you can create a quota prototype or you can use an existing quota prototype and replicate it for the user. A quota prototype equates an existing user's quotas to a prototype file, which is then used to generate identical user quotas for other users. Use the edquota command to create prototypes. If you do not have a quota prototype, create one by following these steps:
Log in as root and use the edquota command with the following syntax:
edquota proto-user users
For example, to set up a quota prototype named large for user eddie, enter the following command:
# edquota large eddie
The program creates the large quota prototype for user eddie. You must use a real login name for the users argument.
Edit the quota file opened by the edquota program to set quotas for each file system that user eddie can access.
To use an existing quota prototype for a user:
Enter the edquota command with the following syntax:
edquota -p proto-user users
For example, to set a file system quota for the user marcy, using the large prototype, enter:
# edquota -p large marcy
Confirm that the quotas are what you want to set for user marcy. If not, edit the quota file and set new quotas for each file system that user marcy can access.
6.5.4 Verifying File System Quotas
If you are enforcing user file system quotas, you should periodically verify your quota system. You can use the quotacheck, quota, and repquota commands to compare the established limits with actual use.
The quotacheck command verifies that the actual block use is consistent with established limits. You should run the quotacheck command twice: when quotas are first enabled on a file system (UFS and AdvFS) and after each reboot (UFS only). The command gives more accurate information when there is no activity on the system.
The quota command displays the actual block use for each user in a file system. Only the root user can execute the quota command.
The repquota command displays the actual disk use and quotas for the specified file system. For each user, the current number of files and the amount of space used (in kilobytes) are displayed along with any quotas.
If you find it necessary to change the established quotas, use the edquota command, which allows you to set or change the limits for each user.
Refer to quotacheck(8), quota(8), and repquota(8) for more information on file system quotas.
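A periodic verification pass might therefore look like the following sketch (the file system and user name are examples; see the reference pages for the exact options on your system):

```shell
# Reconcile recorded allocations with actual use (best on a quiet system):
quotacheck /usr/users

# Summarize disk use and quotas for every user of the file system:
repquota /usr/users

# Display one user's block use against the established limits (root only):
quota -v eddie
```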
6.6 Backing Up and Restoring File Systems
The principal backup and restore utilities for both AdvFS and UFS are the vdump and vrestore utilities. These utilities are used for local operations on both AdvFS and UFS file systems, and are described in vdump(8) and vrestore(8). For remote backup and restore operations on both AdvFS and UFS file systems, the utilities are rvdump and rvrestore.
For administrators who want to back up only UFS, the traditional utilities are described in dump(8) and restore(8).
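As a sketch of a typical backup cycle (the tape device name is an example; see vdump(8) and vrestore(8) for the exact options supported on your system):

```shell
# Full (level 0) backup of the /usr file system to tape:
vdump -0 -f /dev/tape/tape0 /usr

# Later, restore interactively from the same tape, selecting files:
vrestore -i -f /dev/tape/tape0
```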
Examples of backup and restore operations for AdvFS are described in AdvFS Administration. Examples of backup and restore operations for UFS are described in Chapter 9, which also describes the process for creating a bootable tape. While this is not strictly a backup, it does provide a method of creating a bootable magnetic tape copy of the root file system and important system files from which you can boot the system and recover from a disaster such as a root disk crash.
Another archiving service is the NetWorker Save and Restore product, also described in Chapter 9.
6.7 Monitoring and Tuning File Systems
The following sections describe commands you use to display information about, and check, UFS file systems. They also include some basic information on file system tuning. For a more detailed discussion of tuning, refer to the System Configuration and Tuning guide.
6.7.1 Checking UFS Consistency
The fsck program checks UFS and performs some corrections to help ensure a reliable environment for file storage on disks. The fsck program can correct file system inconsistencies such as unreferenced inodes, missing blocks in the free list, or incorrect counts in the superblock.
File systems can become corrupted in many ways, such as improper shutdown procedures, hardware failures, power outages, and power surges. A file system can also become corrupted if you physically write-protect a mounted file system, take a mounted file system offline, or if you do not use the sync command before you shut the system down.
At boot time, the system runs fsck noninteractively, making any corrections that can be done safely. If it encounters an unexpected inconsistency, the fsck program exits, leaves the system in single-user mode, and displays a recommendation that you run the program manually, which allows you to respond yes or no to the prompts that fsck displays.
The command to invoke the fsck program has the following syntax:
/usr/sbin/fsck [options...] [file_system...]
If you do not specify a file system, all the file systems in the /etc/fstab file are checked. If you specify a file system, you should always use the raw device. Refer to the fsck(8) reference page for information about command options.
Note
To check the root file system, you must be in single-user mode, and the file system must be mounted read-only. To shut down the system to single-user mode, use the shutdown command that is described in Chapter 2.
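For example, a manual check might look like the following (the device name is illustrative; always specify the raw device and unmount the file system first):

```shell
# Preen mode: noninteractive, safe corrections only (as at boot time):
fsck -p /dev/rdisk/dsk0g

# Interactive mode: fsck prompts before making each correction:
fsck /dev/rdisk/dsk0g
```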
6.7.2 Monitoring File System Use of Disks
To ensure an adequate amount of free disk space, you should regularly monitor the disk use of your configured file systems. You can do this in any of the following ways:
Check available free space by using the df command
Check disk use by using the du command or the quot command
Verify file system quotas (if imposed) by using the quota command
You can use the quota command only if you are the root user.
6.7.2.1 Checking Available Free Space
To ensure sufficient space for your configured file systems, you should regularly use the df command to check the amount of free disk space in all of the mounted file systems. The df command displays statistics about the amount of free disk space on a specified file system or on a file system that contains a specified file.
The df command has the following syntax:
df [-eiknPt] [-F fstype] [file] [file_system...]
With no arguments or options, the df command displays the amount of free disk space on all of the mounted file systems. For each file system, the df command reports the file system's configured size in 512-byte blocks, unless you specify the -k option, which reports the size in kilobyte blocks. The command displays the total amount of space, the amount presently used, the amount presently available (free), the percentage used, and the directory on which the file system is mounted.
For AdvFS file domains, the df command displays disk space usage information for each fileset.
If you specify a device that has no file systems mounted on it, df displays the information for the root file system.
You can specify a file pathname to display the amount of available disk space on the file system that contains the file.
You cannot use the df command with the block or character special device name to find free space on an unmounted file system. Instead, use the dumpfs command. Refer to df(1) for more information.
The following example displays disk space information about all the mounted file systems:
# /sbin/df
Filesystem            512-blks    used   avail capacity  Mounted on
/dev/disk/dsk2a          30686   21438    6178    77%    /
/dev/disk/dsk0g         549328  378778  115616    76%    /usr
/dev/disk/dsk2          101372    5376   85858     5%    /var
/dev/disk/dsk3          394796      12  355304     0%    /usr/users
/usr/share/man@tsts     557614  449234   52620    89%    /usr/share/man
domain#usr              838432  680320  158112    81%    /usr
Note
The newfs command reserves a percentage of the file system disk space for allocation and block layout. This can cause the df command to report that a file system is using more than 100 percent of its capacity. You can change this percentage by using the tunefs command with the -minfree flag.
If you determine that a file system has insufficient space available, check how its space is being used. You can do this with the du command or the quot command.
The du command pinpoints disk space allocation by directory. With this information you can decide who is using the most space and who should free up disk space.
The du command has the following syntax:
/usr/bin/du [-aklrsx] [directory... filename...]
The du command displays the number of blocks contained in all directories (listed recursively) within each specified directory, file name, or (if none are specified) the current working directory. The block count includes the indirect blocks of each file in 1-kilobyte units, independent of the cluster size used by the system.
If you do not specify any options, an entry is generated only for each directory. Refer to du(1) for more information on command options.
The following example displays a summary of blocks that all main subdirectories in the /usr/users directory use:
# /usr/bin/du -s /usr/users/*
440     /usr/users/barnam
43      /usr/users/broland
747     /usr/users/frome
6804    /usr/users/norse
11183   /usr/users/rubin
2274    /usr/users/somer
From this information, you can determine that user rubin is using the most disk space.
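To rank users by consumption rather than scanning the list by eye, you can sort the du output numerically. This sketch feeds the sample figures from the example above through sort; on a live system you would pipe du -s /usr/users/* directly:

```shell
# Largest consumers first; head trims the list to the top three.
printf '440 barnam\n43 broland\n747 frome\n6804 norse\n11183 rubin\n2274 somer\n' |
  sort -rn | head -3
# -> 11183 rubin, then 6804 norse, then 2274 somer
```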
The following example displays the space that each file and subdirectory in the /usr/users/rubin/online directory uses:
# /usr/bin/du -a /usr/users/rubin/online
1       /usr/users/rubin/online/inof/license
2       /usr/users/rubin/online/inof
7       /usr/users/rubin/online/TOC_ft1
16      /usr/users/rubin/online/build
.
.
.
251     /usr/users/rubin/online
As an alternative to the du command, you can use the ls -s command to obtain the size and usage of files. Do not use the ls -l command to obtain usage information; ls -l displays only file sizes.
You can use the quot command to list the number of blocks in the named file system currently owned by each user. You must be root user to use the quot command.
The quot command has the following syntax:
/usr/sbin/quot [-c] [-f] [-n] [file_system]
The following example displays the number of blocks used by each user and the number of files owned by each user in the /dev/disk/dsk0h file system:
# /usr/sbin/quot -f /dev/disk/dsk0h
The character device special file must be used to return the information for UFS files, because when the device is mounted the block special device file is busy.
Refer to quot(8) for more information.
6.7.3 Improving UFS Read Efficiency
To enhance the efficiency of UFS reads, use the tunefs command to change a file system's dynamic parameters, which affect layout policies.
The tunefs command has the following syntax:
tunefs [-a maxcontig] [-d rotdelay] [-e maxbpg] [-m minfree] [-o optimization] [file_system]
You can use the tunefs command on both mounted and unmounted file systems; however, changes are applied only if you use the command on unmounted file systems. If you specify the root file system, you must also reboot to apply the changes.
You can use command options to specify the dynamic parameters that affect the disk partition layout policies. Refer to tunefs(8) for more information on the command options and to sys_attrs_ufs(5) for information on UFS subsystem attributes.
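For instance, to change the reserved free-space percentage (minfree) on a file system, a session might look like the following sketch (the device and mount point names are examples):

```shell
# tunefs changes take effect only on unmounted file systems.
umount /usr/users

# Lower the reserved free-space percentage to 5 percent:
tunefs -m 5 /dev/rdisk/dsk3h

# Remount the file system when the change is complete.
mount /dev/disk/dsk3h /usr/users
```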
6.8 Troubleshooting File Systems
The following tools can be used to react to problems associated with UFS file systems:
Using the UNIX Shell Option
The UNIX Shell Option is an installation option for experienced administrators and is available during either a textual or graphical installation of the operating system. For example, you may be able to recover from a corrupted root file system using this option.
See the Installation Guide for an introduction to this installation option and the Installation Guide -- Advanced Topics for an explanation of the file-system related administration you can accomplish with it. The option can be used for both AdvFS and UFS file system problems.
Using the /usr/field directory and the fsx command
The /usr/field directory contains programs related to the field maintenance of the operating system. You can use the programs in this directory to monitor and exercise components of the operating system and system hardware. The fsx utility exercises file systems. Information about the program is in fsx(8). Other programs in the directory, such as a tape exerciser (tapex) and a disk exerciser (diskx), might be useful when investigating file system problems.
The dumpfs utility displays information on UFS file systems. Refer to the dumpfs(8) reference page.
EVM (the Event Manager) can be used to filter and display events that are related to file system problems. This utility is useful for setting up preventative maintenance and monitoring of file systems and storage devices. Refer to Chapter 13 for information.
The SysMan Station and Insight Manager provide graphical views of file systems and can be used to monitor and troubleshoot file system problems, such as lack of disk space. Refer to Chapter 1 for information.