Managing system performance

Managing disk resource usage

You need to monitor disk use carefully to prevent running out of disk space. There are ways to make better use of the disk space you have, or to recover space that is already in use. You might also consider adding hard disks to your system to increase the amount of disk space available.

Managing disk space

Make better use of the disk space on your system by monitoring filesystem use, balancing filesystem space between filesystems, controlling directory size, locating and deleting inactive files, reorganizing individual directories, and changing the maximum file size, as described in the following sections.

Monitoring filesystem use

You can use the df(1M) command to monitor filesystem use. In general, filesystems should have at least 10 to 15 percent of their capacity available. If available space falls below 10 percent, filesystem fragmentation increases and performance is degraded. Note that on the sfs (secure) filesystem and the ufs filesystem, if the available space falls below 10 percent of capacity, non-root users cannot write to the filesystem.
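
For example, to report usage for all mounted filesystems in kilobytes (df -k is widely supported on SVR4 systems; see df(1M) for the exact options available on your system):

   df -k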

The default system configuration is set up so the filesystem blocks are allocated in an optimum way for most environments. See ``Managing filesystem types'' for more information on filesystem allocation.

Balancing filesystem space: moving user directories

You can also control filesystem space by balancing the load between filesystems. To do this, user directories often need to be moved. It is best to group users with common interests in the same filesystem.

To balance filesystem space:

  1. Determine which user directories you want to move and notify the users.


    NOTE: Be sure to notify users of moves well enough in advance so that they can plan their work around the expected change. Make sure they are not logged in to the system when you move their directories.

  2. Use the find(1) and cpio(1) commands to move directories and manipulate the filesystem tree. Move groups of users with a single cpio command to avoid unlinking and duplicating linked files.

    Example: Move directory trees userx and usery from filesystem fs1 to fs2 where there is more space available.

       cd /fs1
       find userx usery -print -depth | cpio -pdm /fs2
    

  3. Verify that the copy was made.
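
    Example (assuming your diff command supports the -r recursive option): compare each old tree with its copy; no output means the trees match.

       diff -r /fs1/userx /fs2/userx
       diff -r /fs1/usery /fs2/usery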

  4. Change the default login directories for userx and usery to the new locations by running
       /usr/sbin/usermod -d /fs2/userx userx
       /usr/sbin/usermod -d /fs2/usery usery
    

  5. Remove the old default login directories by running
       rm -rf /fs1/userx /fs1/usery
    

  6. Send electronic mail to userx and usery to notify them that their login directories have been moved and their pathname dependencies may need to be changed.
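
    For example, using the mailx(1) mail user agent (the message text is illustrative):

       echo "Your login directory has moved from /fs1 to /fs2." | mailx -s "Login directory moved" userx usery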

Controlling directory size

Very large directories are inefficient and can affect performance. If a directory becomes bigger than 10K (twenty 512-byte blocks, or about 600 entries of average name length), then directory searches can cause performance problems. For larger block sizes, bigger directories are less of a problem, but they should still be watched carefully. The find command can locate large directories:

   find / -type d -size +20 -print


NOTE: The size argument to the find command is in 512-byte blocks.
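
To check the size of one suspect directory, list the directory file itself; ls -ld reports its size in bytes:

   ls -ld /home/bob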

For all the available filesystem types, removing files from a directory does not make that directory smaller. When a file is removed from a directory, the space is left in the directory and is available for new files added to the directory.

For example, when a file is removed from a directory in an s5 filesystem, the inode (file header) number is cleared. This leaves an unused slot that can be reused; over time the number of empty slots might become large. For example, if you have a directory on an s5 filesystem with 100 files in it and you remove the first 99 files, the directory still contains 99 empty slots, at 16 bytes per slot, preceding the active slot. Unless a directory is reorganized on the disk, it will retain the largest size it has ever achieved.

Note that some filesystem types, such as ufs and sfs, can shrink a directory dynamically when new files are created in it. However, because free directory data blocks are not coalesced, a directory can shrink only back to the block containing the last useful file entry. The same problem (retention of the largest-ever size) therefore exists if that last useful entry happens to lie in one of the later data blocks.

Locating and deleting inactive files

You can reduce directory size by locating inactive files, backing them up, and then deleting them.

To locate and delete files:

  1. Use the find command to locate inactive files.

    Example:

       find / -mtime +90 -atime +90 -print > files
    
    where files contains the names of files neither written to nor accessed within a specified time period, here 90 days (``+90'').

  2. Notify users that files will be deleted; give enough warning for the users to save or delete their files.

  3. Delete the inactive files.
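
    Example (the tape device name is illustrative; substitute whatever backup medium you use): back up the files listed in files, then remove them.

       cpio -ocv < files > /dev/rmt/ctape1
       xargs rm -f < files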

Reorganizing a single directory

Before you begin

Before you reorganize a directory, use the ``Locating and deleting inactive files'' procedure to remove files that are no longer useful.

To reorganize a single directory:

  1. Move the current directory to another temporary directory.

    Example:

       mv /home/bob /home/obob
    

  2. Create the new directory.

    Example:

       mkdir /home/bob
    

  3. In the old directory, use the find and cpio commands to copy the files into the new directory.

    Example:

       cd /home/obob
       find . -print | cpio -plm ../bob
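
    Before removing the old directory, you can confirm that the new directory file is smaller by listing both; ls -ld reports each directory's size in bytes:

       ls -ld /home/obob /home/bob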
    

  4. Remove the temporary directory.

    Example:

       cd ..
       rm -rf obob
    

Changing the maximum file size

If you install an application such as a database program that creates very large files, you may need to increase the maximum file size that the system can handle.

The maximum file size for the system is determined by the parameters SFSZLIM and HFSZLIM. These parameters are described in ``Tunable parameters''.

To increase the maximum file size:

  1. Edit the values of the parameters SFSZLIM and HFSZLIM.

    Make the two values identical, unless you have a good reason to do otherwise. HFSZLIM must not be less than SFSZLIM.

    Example: To change the maximum file size to 10MB (10,485,760 bytes), change the values of HFSZLIM and SFSZLIM to 0xA00000 (the 0x denotes hexadecimal). A combined sketch of both steps, using the standard SVR4 tuning tools, follows this procedure.

  2. Rebuild the operating system.
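
The exact commands for editing tunables and rebuilding the kernel vary; the following is a minimal sketch, assuming the standard SVR4 idtune(1M) and idbuild(1M) tools in /etc/conf/bin:

   /etc/conf/bin/idtune SFSZLIM 0xA00000
   /etc/conf/bin/idtune HFSZLIM 0xA00000
   /etc/conf/bin/idbuild -B

The new limits take effect the next time the system boots on the rebuilt kernel.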

Reorganizing a filesystem


NOTE: If you have only one disk and you've accepted all the default values during installation, ignore this information. If you have more than one disk and you are running a heavily used filesystem, the following information might be useful.

A file consists of multiple disk blocks, which may or may not be contiguous. Files that consist of contiguous disk blocks can be accessed more efficiently than those that aren't. A heavily used filesystem composed of noncontiguous disk blocks might produce performance problems. You can make your filesystem more efficient by rearranging the files to make the constituent blocks contiguous, which also has the effect of shrinking your directories. You cannot reorganize the root filesystem.

The following sections describe two methods for improving performance by reorganizing files: one for sfs, ufs, and vxfs filesystems, and one for s5 filesystems.

Reorganizing an sfs, ufs, or vxfs filesystem


NOTE: The following procedure is provided as a way to clean up a severely fragmented and disorganized filesystem. Because this procedure is cumbersome, we don't recommend using it unless your filesystem is causing severe performance problems.

To reorganize any type of filesystem except an s5 filesystem:

  1. Make sure the filesystem you want to clean up is mounted and is not being used by any users or processes.

  2. Back up the filesystem to a spare disk or cartridge tape, or any other available medium. Use the cpio(1) or tar(1) command.
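
    Example (the mount point and tape device are illustrative; substitute your own filesystem and backup medium):

       cd /home2
       find . -print | cpio -ocv > /dev/rmt/ctape1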

  3. Unmount the filesystem.

  4. To create a filesystem identical to the original, recover the syntax of the mkfs command line used to create the original filesystem, and run it again. The mkfs command with the -m option prints that original command line; evaluating the output with eval causes the command to be executed again. Use the eval routine:
       eval `mkfs -F file_sys_type -m device`
    

    Example:

       eval `mkfs -F vxfs -m /dev/dsk/c0b0t0d0sc`
    
    

  5. Mount the new (empty) filesystem.

  6. Restore the contents from the backup copy.
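
    Example (continuing the backup example above; the mount point and tape device are illustrative):

       cd /home2
       cpio -icvdum < /dev/rmt/ctape1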

Reorganizing an s5 filesystem

To reorganize an s5 filesystem:

  1. Locate a spare disk slice to which you can copy the filesystem.

  2. Unmount both filesystems.

  3. Run dcopy to reorganize the filesystem.

    The first argument is the device of the filesystem you are reorganizing; it should be the character (raw) device. The second argument is the device of the spare disk slice that will hold the reorganized copy.

    Example:

       /usr/sbin/dcopy -F s5 /dev/rdsk/c0b0t0d0sc /dev/dsk/c0b0t0d0s4
    


    NOTE: dcopy normally catches interrupts and quit signals and reports its progress. To kill dcopy, send it a quit signal followed by an interrupt. See dcopy(1M).

Selecting a filesystem type

The choice of filesystem type can affect the performance of your system. The default filesystem type provided during installation is the VERITAS filesystem (vxfs) with a logical block size of 1K (1024 bytes) for filesystems up to 8GB. For most applications, this should provide the best balance of performance and reliability because vxfs offers speedy system boot and shutdown and fast recovery from system outages such as power failures. However, some applications may perform better using other filesystem types. For detailed information about the vxfs filesystem type, see ``The vxfs filesystem type''.

If you want to change the filesystem type for an existing filesystem, the procedure is the same as for reorganizing a filesystem: back up the filesystem and then remake it.

Depending on the average size of the files, you might also want to change either the logical block size or the filesystem type of the filesystem. vxfs uses logical block sizes of 2K (2048 bytes), 4K (4096 bytes), and 8K (8192 bytes), in addition to the default size of 1024-byte blocks. Other filesystem types that can be selected include s5, sfs, and ufs.

There are three logical block sizes for s5 filesystems: 512 bytes, 1K (1024 bytes), and 2K (2048 bytes). The ufs and sfs filesystems also have three logical block sizes: 2K (2048 bytes), 4K (4096 bytes), and 8K (8192 bytes). Each has its advantages and disadvantages in terms of performance.

Selecting a logical block size for a vxfs filesystem

vxfs allocates storage in extents that are collections of one or more blocks, so there are no fragments with vxfs. Because vxfs does allocation and I/O in multiple-block extents, keeping the logical block size as small as possible increases performance and reduces wasted space for most workloads. For the most efficient space utilization, best performance, and least fragmentation, use the smallest block size available on the system. The smallest block size available is 1K, which is the default block size for vxfs filesystems created on the system.

For a vxfs filesystem, select a logical block size of 1K, 2K, 4K, or 8K bytes; the default is 1024-byte blocks for a filesystem smaller than 8GB.
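The block size is fixed when the filesystem is created. The following is a hedged sketch of creating a vxfs filesystem with 1K blocks (the device name and size in sectors are illustrative, and bsize is the vxfs-specific mkfs option on most systems; check mkfs(1M) and the vxfs-specific options on your system):

   mkfs -F vxfs -o bsize=1024 /dev/rdsk/c0b0t0d0s4 1024000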

Selecting a logical block size for a sfs, s5, or ufs filesystem

Generally, you will get the best possible performance (system throughput) from sfs, s5, and ufs filesystems if the logical block size is the same as the page size. The system kernel uses the logical block size when reading and writing files. For example, if the logical block size of the filesystem is 4K, whenever I/O is done between a file and memory, 4K chunks of the file are read into or out of memory. The ufs and sfs filesystems provide the option to specify fragment size, too; the s5 filesystem does not provide this feature.

A large logical block size improves disk I/O performance by reducing seek time, and also decreases CPU I/O overhead. On the other hand, if the logical block size is too large, then disk space is wasted. The space is lost because even if only a portion of a block is needed, the entire block is allocated. For example, if files are stored in 1K (1024-byte) logical blocks, then a 24-byte file wastes 1000 bytes. If the same 24-byte file is stored on a filesystem with a 2K (2048-byte) logical block size, then 2024 bytes are wasted. However, if most files on the filesystem are very large, this waste is reduced considerably.

For a filesystem with mostly small files, the small logical block sizes (512 byte and 1K) available for the s5 filesystem have the advantage of less wasted space on disk. However, CPU overhead might be increased for files larger than the block size. Similarly, for sfs and ufs filesystems, when there are mostly small files, small fragment sizes have the advantage of less wasted space on disk.

The sar command with the -u option can help determine if large I/O transfers are slowing the system down. See ``Checking CPU use with sar -u''.
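For example, to take ten samples of CPU utilization at five-second intervals:

   sar -u 5 10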

For an sfs or ufs filesystem, select a 2K, 4K, or 8K block size. The 4K block size provides distinctly better performance than the 2K or 8K block size on a machine with a 4K page size.

For an sfs or ufs filesystem, you can also choose a fragment size. This size can be any power of two between 512 bytes and the block size, and there must be no more than 8 fragments per logical block. Using fragments is not worthwhile on an sfs or ufs filesystem with a 2K block size, because the amount of space saved is less than the 10 percent that would be reserved to prevent excessive fragmentation.
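Block and fragment sizes are also fixed when the filesystem is created. A hedged sketch, assuming the ufs-specific bsize and fragsize options of mkfs(1M) found on many SVR4-derived systems (the device name and size in sectors are illustrative; check the ufs-specific mkfs options on your system):

   mkfs -F ufs -o bsize=4096,fragsize=1024 /dev/rdsk/c0b0t0d0s4 1024000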

For an s5 filesystem, the default is 1024-byte blocks. You can select a 512-byte, 1K, or 2K block size. A 2K block size provides the best performance for an s5 filesystem on a machine with a page size equal to or greater than 2K. There are no fragments defined for an s5 filesystem.

