Sales Statistics Batch Job Sped Up by 86%
Cordes und Graefe KG in Bremen, Germany, ran a batch job that created several levels of sales statistics from more than 30 million order line records. To build the statistics, additional information had to be fetched from other databases (customer data, rebates, etc.): a total of five other files were accessed by key for every record. The job ran at a low CPU percentage for around 5 hours on an iSeries server with abundant CPU, main storage, and disk capacity.
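The access pattern described above can be sketched roughly as follows. This is a minimal Python illustration, not the customer's RPG code: the field names, file contents, and the read_by_key helper are hypothetical stand-ins for the keyed database reads, and Python dicts stand in for the keyed files.

```python
# Rough sketch (hypothetical names) of the original per-record pattern:
# every order line triggers several synchronous keyed database reads.

def read_by_key(keyed_file, key):
    """Stands in for a keyed database read: in the real job, each such
    call could block while waiting for a physical disk I/O."""
    return keyed_file[key]

def build_statistics(order_lines, customers, rebates):
    totals = {}
    for line in order_lines:                # > 30 million in production
        cust = read_by_key(customers, line["customer_no"])  # keyed read 1
        reb = read_by_key(rebates, line["rebate_code"])     # keyed read 2
        # ... three further keyed reads into other files in the real job ...
        key = (cust["group"], line["article_no"])
        amount = line["quantity"] * line["price"] * (1 - reb["pct"])
        totals[key] = totals.get(key, 0.0) + amount
    return totals

# Tiny usage example with made-up data:
customers = {42: {"group": "wholesale"}}
rebates = {"R1": {"pct": 0.10}}
lines = [{"customer_no": 42, "rebate_code": "R1",
          "article_no": "A7", "quantity": 2, "price": 5.0}]
result = build_statistics(lines, customers, rebates)
print(result)
```

Five random keyed reads per record means the main loop performs well over 150 million lookups, each a potential disk wait, which is what drove the 5-hour elapsed time.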
Because this job had to run frequently, it was desirable to speed it up. The first step was to pinpoint exactly what was consuming most of the run time. A low CPU percentage (when CPU resources are plentiful) means that the job is waiting for something most of the time.
Analyzing the job with GiAPA (Global i Applications Performance Analyzer) from iPerformance made the reason for the low CPU percentage easy to see: most of the time, the job was waiting for physical disk I/Os to complete, caused by millions of synchronous database reads. GiAPA also showed that the time was spent in QDBGETKY (read a record by key), and identified the programs and the source statement numbers issuing these reads. The file names involved could also be seen, although these were of course already known.
The random access to several large database files prevented the operating system's expert cache from making the needed records available in advance, and the files were too large to be kept in main storage.
However, only a few fields were needed from the randomly read files, and iPerformance therefore suggested reading each of these files once at job start, using sequential blocked access, and loading the key fields and the few bytes of data needed into user indexes.
All the random database accesses could then be replaced by index search operations, and the indexes would be small enough for storage management to keep them in main storage.
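Assuming each user index behaves like an in-memory key/value lookup, the restructured job can be sketched as: read each lookup file once sequentially at job start, keep only the key and the few fields actually needed, then replace every keyed database read in the main loop with an index search. Again a hedged Python sketch with illustrative names, where dicts stand in for the user indexes:

```python
# Sketch of the optimized approach (illustrative names): preload each
# lookup file once with sequential blocked reads, keeping only the key
# and the few fields actually needed, then do in-memory lookups
# (dicts standing in for user indexes) in the main loop.

def preload(read_sequential, key_field, wanted_fields):
    """Read a whole file sequentially once; build key -> small record,
    dropping all fields the statistics job does not need."""
    index = {}
    for rec in read_sequential():           # blocked sequential read
        index[rec[key_field]] = {f: rec[f] for f in wanted_fields}
    return index

def build_statistics(order_lines, customer_idx, rebate_idx):
    totals = {}
    for line in order_lines:                # > 30 million in production
        cust = customer_idx[line["customer_no"]]  # index search, no disk wait
        reb = rebate_idx[line["rebate_code"]]     # index search, no disk wait
        key = (cust["group"], line["article_no"])
        amount = line["quantity"] * line["price"] * (1 - reb["pct"])
        totals[key] = totals.get(key, 0.0) + amount
    return totals

# Usage with made-up data; the "name" and "valid_to" fields are
# examples of data the preload step deliberately leaves behind:
customers = [{"customer_no": 42, "group": "wholesale", "name": "..."}]
rebates = [{"rebate_code": "R1", "pct": 0.10, "valid_to": "..."}]
customer_idx = preload(lambda: iter(customers), "customer_no", ["group"])
rebate_idx = preload(lambda: iter(rebates), "rebate_code", ["pct"])
lines = [{"customer_no": 42, "rebate_code": "R1",
          "article_no": "A7", "quantity": 2, "price": 5.0}]
result = build_statistics(lines, customer_idx, rebate_idx)
print(result)
```

The design choice is a classic space-for-time trade: one sequential pass per lookup file up front, in exchange for eliminating millions of random disk waits in the main loop, which works precisely because the slimmed-down indexes fit in main storage.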
The strategy proved successful: the new version of the program uses only around 40 minutes of elapsed time, and the total CPU time used also decreased. The job's CPU percentage is now very high, since it no longer needs to wait for data to be fetched from disk, but because it runs at low batch priority, this never disturbs any interactive jobs.