What are synchronous and asynchronous I/Os?
Physical I/Os are divided into synchronous and asynchronous I/Os.
If a program must wait for the completion of a physical I/O operation, the I/O is called synchronous. A good example is a random read of a record by key. After issuing such a read instruction, the program must wait for database management to deliver the wanted record, unless the record happens to have been used recently and is therefore still in main memory.
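The blocking behaviour can be sketched as follows. This is a minimal Python illustration, not an IBM i API: the names (read_by_key, RECORDS) and the 10 ms latency are assumptions chosen to make the point visible.

```python
import time

# Pretend "database": key -> record, with a simulated disk latency.
RECORDS = {"C1001": "Smith, John", "C1002": "Jones, Mary"}
DISK_LATENCY_S = 0.01   # ~10 ms, a typical mechanical disk access (assumed)

cache = {}  # records recently used stay in main memory

def read_by_key(key):
    """Synchronous read: returns only when the record has been delivered."""
    if key in cache:               # record still in main memory: no wait
        return cache[key]
    time.sleep(DISK_LATENCY_S)     # simulate waiting for the physical I/O
    cache[key] = RECORDS[key]
    return cache[key]

t0 = time.perf_counter()
rec = read_by_key("C1001")         # first read: the program waits for the disk
cold = time.perf_counter() - t0

t0 = time.perf_counter()
rec = read_by_key("C1001")         # record is in memory: no physical I/O
warm = time.perf_counter() - t0

print(rec, cold > warm)
```

The second read is orders of magnitude faster, which is exactly the "unless the record is still in main memory" case above.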
If the program can continue while the operating system handles the I/O, the I/O is called asynchronous. Asynchronous I/Os are preferable from a performance point of view because they do not delay the job. A typical example is a write instruction for a record: database management receives the record and immediately returns control to the program, which continues running while the operating system transfers the record to disk as soon as possible.
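The hand-over can be sketched with a background thread standing in for the operating system; all names here (write_record, os_writer) are illustrative assumptions, not a real system interface.

```python
import queue
import threading
import time

pending = queue.Queue()
written = []                        # stands in for the records on disk

def os_writer():
    """Transfers queued records to 'disk' as soon as possible."""
    while True:
        record = pending.get()
        if record is None:          # shutdown signal
            break
        time.sleep(0.01)            # simulate the physical disk write
        written.append(record)

writer = threading.Thread(target=os_writer)
writer.start()

def write_record(record):
    """Asynchronous write: control returns to the caller immediately."""
    pending.put(record)

t0 = time.perf_counter()
write_record("order 4711")          # the program is not delayed by the disk
elapsed = time.perf_counter() - t0

pending.put(None)                   # let the writer drain and stop
writer.join()
print(elapsed < 0.01, written)
```

The program's own cost is only the hand-over (microseconds); the 10 ms physical transfer happens in parallel with the program's further work.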
When a file is read sequentially, the operating system can anticipate the coming read requests from the program and read ahead asynchronously. This can speed up a job very significantly.
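Read-ahead works because processing one block can overlap the physical read of the next. A small Python sketch of the idea, with an assumed prefetch thread and block sizes (this is the principle only, not how the operating system implements it):

```python
import queue
import threading
import time

BLOCKS = [f"block {i}" for i in range(5)]   # the file, block by block

def prefetcher(out):
    """Reads ahead of the consuming program, like the OS read-ahead."""
    for block in BLOCKS:
        time.sleep(0.005)                   # simulate one physical read
        out.put(block)
    out.put(None)                           # end of file

buffered = queue.Queue(maxsize=2)           # a small read-ahead buffer
threading.Thread(target=prefetcher, args=(buffered,)).start()

processed = []
while (block := buffered.get()) is not None:
    time.sleep(0.005)                       # processing overlaps the next read
    processed.append(block)

print(processed)
```

With the overlap, total elapsed time approaches max(read time, processing time) instead of their sum.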
Blocking of physical I/Os will, when possible, also speed a job up very significantly. The operating system attempts to use record blocking for physical I/Os, e.g. when a file is read in "arrival sequence", i.e. in the order in which the records are stored on the disk. A blocking factor may be forced with the NBRRCDS parameter of the OVRDBF (Override with Database File) command.
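The arithmetic behind record blocking is simple: transferring many records per physical I/O divides the number of disk accesses by the blocking factor. A Python sketch of that principle (the function names are illustrative; this is not the NBRRCDS mechanism itself):

```python
RECORDS = [f"record {i}" for i in range(1000)]   # a small assumed file
physical_reads = 0

def read_block(start, nbrrcds):
    """One physical I/O delivers nbrrcds records."""
    global physical_reads
    physical_reads += 1
    return RECORDS[start:start + nbrrcds]

def read_file(nbrrcds):
    """Reads the whole file; returns the number of physical I/Os used."""
    global physical_reads
    physical_reads = 0
    for start in range(0, len(RECORDS), nbrrcds):
        read_block(start, nbrrcds)
    return physical_reads

unblocked = read_file(1)     # one record per physical I/O: 1000 accesses
blocked = read_file(100)     # 100 records per physical I/O: 10 accesses
print(unblocked, blocked)
```

A blocking factor of 100 turns 1000 physical reads into 10, which is why forcing a larger block can speed up an arrival-sequence batch job dramatically.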
One disk access is (very roughly) one million times slower than one CPU cycle. Waits for synchronous disk I/Os are the most common reason for delays in commercial applications, both interactive and batch. These waits also explain why a computer can have many jobs running actively at the same time and still not use 100 % of the CPU.
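The rough factor can be checked with order-of-magnitude figures; the latencies below are assumptions for illustration, not measurements of any particular machine.

```python
cpu_cycle_ns = 1               # ~1 ns per cycle on a ~1 GHz CPU (assumed)
disk_access_ns = 5_000_000     # ~5 ms per mechanical disk access (assumed)

# Cycles the CPU could have executed during one disk access.
ratio = disk_access_ns // cpu_cycle_ns
print(f"{ratio:,} CPU cycles per disk access")
```

Five million cycles per access is the same order of magnitude as the "one million times slower" rule of thumb, and it is why a job waiting on synchronous I/O leaves the CPU free for other jobs.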