Asynchronous I/O

Advantages of asynchronous I/O

The Asynchronous Input/Output (I/O) mechanism provides a user process with the ability to overlap processing with I/O operations. In real-time and transaction-processing environments, applications may need to overlap their compute and I/O processing to improve throughput and determinism on a per-process or per-application basis. You can use Asynchronous I/O to perform read-ahead and write-behind in a controlled fashion. One process can have many I/O operations in progress while it executes other code. With synchronous I/O, by contrast, the process would be blocked waiting for each I/O operation.

Previously in UNIX System V, a call to the read system call was considered logically synchronous because the call could not return to the user until the requested data had been read into the specified buffer in the calling process' address space. The user process was blocked until the data had been placed into the user's buffer and could not execute other instructions while the I/O was taking place. Non-blocking I/O could be performed only if the O_NONBLOCK flag was set on a call to open or fcntl. A non-blocking read call, such as a read from a pipe, returns immediately with a failure indication if no data is available. In contrast, a successful call to begin an asynchronous read returns to the calling process immediately after queuing a read request to the open file descriptor, so that the read request is satisfied when data becomes available.
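
To make the contrast concrete, here is a minimal sketch of an asynchronous read, assuming the POSIX <aio.h> interface (aio_read, aio_suspend, aio_error, and aio_return) and a hypothetical file named datafile; it illustrates the behavior described above rather than the exact interfaces this document specifies.

    #include <aio.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096];
        struct aiocb cb;
        const struct aiocb *list[1];
        ssize_t n;
        int fd;

        fd = open("datafile", O_RDONLY);   /* hypothetical file name */
        if (fd == -1) {
            perror("open");
            return 1;
        }

        memset(&cb, 0, sizeof(cb));
        cb.aio_fildes = fd;
        cb.aio_buf    = buf;
        cb.aio_nbytes = sizeof(buf);
        cb.aio_offset = 0;

        /* Queue the read; the call returns as soon as the request is
         * queued, before any data has been placed in buf. */
        if (aio_read(&cb) == -1) {
            perror("aio_read");
            return 1;
        }

        /* ... the process is free to execute other code here ... */

        /* When the data is finally needed, wait for the request to
         * finish and collect the result (the byte count, as a
         * synchronous read would have returned it). */
        list[0] = &cb;
        aio_suspend(list, 1, NULL);
        if (aio_error(&cb) == 0) {
            n = aio_return(&cb);
            printf("read %ld bytes\n", (long)n);
        }

        close(fd);
        return 0;
    }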

A call to the write system call is considered logically synchronous because the call does not return to the user until the requested data has been written into the file system cache. After returning from the call, the caller is free to reuse its buffers. However, the data is actually written to disk asynchronously at a later time. If the caller has set the O_SYNC flag on the call to open or fcntl, the call to write is truly synchronous and does not return to the user until the requested data has been written out to disk. In contrast, a successful call to begin an asynchronous write queues a write request and returns immediately, without waiting for the I/O to complete. When the call returns, the data has not yet been copied out of the user buffers, so the caller must not reuse the buffers until the I/O has completed.
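
The buffer-reuse rule can be sketched as follows, again assuming the POSIX <aio.h> interface; the function name start_write is illustrative, and fd is assumed to be an already-open file descriptor.

    #include <aio.h>
    #include <string.h>
    #include <sys/types.h>

    /* Queue an asynchronous write of len bytes from buf at offset off. */
    int start_write(int fd, struct aiocb *cb, char *buf, size_t len, off_t off)
    {
        memset(cb, 0, sizeof(*cb));
        cb->aio_fildes = fd;
        cb->aio_buf    = buf;
        cb->aio_nbytes = len;
        cb->aio_offset = off;

        /* aio_write returns as soon as the request is queued.  Unlike a
         * synchronous write, the data has not been copied out of buf yet,
         * so buf must not be modified or reused until aio_error() no
         * longer reports EINPROGRESS for this control block. */
        return aio_write(cb);
    }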

An important part of the Asynchronous I/O facility is the ability to request asynchronous notification of I/O completion. At the time the Asynchronous I/O request is made, the application may also request notification when the I/O completes.
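
As a sketch of requesting notification at submission time, the following assumes the POSIX <aio.h> interface with signal-based notification (SIGEV_SIGNAL); the choice of SIGUSR1 and the function name start_notified_read are illustrative assumptions.

    #include <aio.h>
    #include <signal.h>
    #include <string.h>

    static volatile sig_atomic_t io_done;

    static void on_aio_done(int sig)
    {
        (void)sig;
        io_done = 1;              /* the queued request has completed */
    }

    /* Queue a read and ask to be told, via SIGUSR1, when it completes. */
    int start_notified_read(int fd, struct aiocb *cb, void *buf, size_t len)
    {
        struct sigaction sa;

        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = on_aio_done;
        sigaction(SIGUSR1, &sa, NULL);

        memset(cb, 0, sizeof(*cb));
        cb->aio_fildes = fd;
        cb->aio_buf    = buf;
        cb->aio_nbytes = len;

        /* Request asynchronous notification of I/O completion. */
        cb->aio_sigevent.sigev_notify = SIGEV_SIGNAL;
        cb->aio_sigevent.sigev_signo  = SIGUSR1;

        return aio_read(cb);
    }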

An asynchronous read enables an application to control the amount of read-ahead that is performed, so that data can already be available when the database application needs it. With database and transaction-processing software, writing out data can often be done asynchronously, although the application may need to know when the I/O has completed. The notification mechanism provided in UNIX System V fulfills this need.
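
A minimal sketch of controlled read-ahead, assuming the same POSIX <aio.h> calls; the read-ahead depth, block size, and function name prime_readahead are illustrative assumptions, and error handling is omitted.

    #include <aio.h>
    #include <string.h>
    #include <sys/types.h>

    #define NAHEAD  4                /* illustrative read-ahead depth */
    #define BLKSZ   8192             /* illustrative block size */

    static struct aiocb cbs[NAHEAD];
    static char bufs[NAHEAD][BLKSZ];

    /* Queue NAHEAD reads at consecutive offsets starting at off, so the
     * data is likely to be in memory by the time it is requested. */
    void prime_readahead(int fd, off_t off)
    {
        int i;

        for (i = 0; i < NAHEAD; i++) {
            memset(&cbs[i], 0, sizeof(cbs[i]));
            cbs[i].aio_fildes = fd;
            cbs[i].aio_buf    = bufs[i];
            cbs[i].aio_nbytes = BLKSZ;
            cbs[i].aio_offset = off + (off_t)i * BLKSZ;
            (void)aio_read(&cbs[i]);   /* returns once the request is queued */
        }
    }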


NOTE: ``How to use asynchronous I/O with your application'' describes how an application may use the interfaces specified in this document.

Performance

The Asynchronous I/O feature enhances performance by allowing applications to overlap processing with I/O operations. Using Asynchronous I/O enables an application to have more CPU time available to perform other processing while the I/O is taking place.

When the read and write system calls return, the data has been copied into or out of the user buffer, respectively. This usually requires copying data between user and kernel space, and may also require real I/O. If only the copy operation is required, the user process must wait for it; if I/O is necessary, the process is also blocked. For applications such as real-time and transaction processing, this waiting and blocking time can be significant enough to require a facility that supports concurrent execution of I/O and other work within the application.

With the Asynchronous I/O capability, an application can submit an I/O request without waiting for its completion, and perform other CPU work either until it is asynchronously notified of the completion of the I/O operation or until it chooses to poll for the completion. For applications that involve large amounts of I/O data, this overlapping of CPU and I/O can offer a significant improvement in throughput.
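
The polling style can be sketched as follows, assuming the same POSIX <aio.h> calls; do_other_work is a hypothetical placeholder for the application's CPU-bound processing.

    #include <aio.h>
    #include <errno.h>
    #include <sys/types.h>

    extern void do_other_work(void);  /* hypothetical CPU-bound processing */

    /* Overlap computation with a previously queued request: poll for
     * completion between units of other work, then collect the result. */
    ssize_t overlap_until_done(struct aiocb *cb)
    {
        while (aio_error(cb) == EINPROGRESS)
            do_other_work();          /* the I/O proceeds while this runs */

        return aio_return(cb);        /* byte count, or -1 with errno set */
    }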


