Good afternoon.

I have developed an application to efficiently schedule chemotherapy patients at our hospital. The application takes into account several resource constraints (available chairs, available nurses, nurse coverage assignment to chairs) as well as the chair time and nursing time required for a regimen.

The algorithm for packing appointments respects each constraint and typically schedules a day of treatments (30-60) within 9-10 seconds on my workstation, down from 27 seconds initially. I would like to get it below 5 seconds if possible.

I think what's slowing it down is simply the number of rows and joins. The algorithm creates a scheduling matrix with one row per 5-minute timeslot, per unit, per nurse assigned to the unit. That translates to 3,280 rows for the days I have designed in development (each day can change).

To determine the available slots, the algorithm finds the earliest slot that has an available chair and then counts the required number of consecutive intervals after it, so a 60-minute regimen requires 12 consecutive rows. This is accomplished by joining the table on itself. A second query is run over the same range, but with respect to the nurse time and an available nurse. Finally, those two are joined against each other. Effectively, it is:

Select *
From (
        Select *
        From matrix m1, matrix m2
        Where m1.xxxxx = m2.xxxxx
     ) chair,
     (
        Select *
        From matrix m1, matrix m2
        Where m1.xxxxx = m2.xxxxx
     ) nurse
Where chair.id = nurse.id

With matrix having 3,280 rows. Ugh.

I have tried various indexes and clustering approaches with little success. Any ideas?

Thanks,

Matthew Hartman
Programmer/Analyst
Information Management, ICP
Kingston General Hospital
(613) 549-6666 x4294
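P.S. In case it helps to see the shape of the chair-side pass, here is a stripped-down sketch of what it boils down to. The column names (chair_id, slot_time, chair_free) are placeholders rather than my real schema, and the real query carries more columns, but the idea is the same: self-join the matrix over the length of the regimen and keep the earliest start whose 5-minute slots are all free.

-- Placeholder schema: matrix(chair_id int, slot_time timestamp, chair_free boolean)
Select m1.chair_id, m1.slot_time As start_slot
From   matrix m1
Join   matrix m2
  On   m2.chair_id  = m1.chair_id
 And   m2.slot_time >= m1.slot_time
 And   m2.slot_time <  m1.slot_time + interval '60 minutes'
Where  m1.chair_free And m2.chair_free
Group By m1.chair_id, m1.slot_time
Having Count(*) = 12              -- twelve consecutive free 5-minute slots
Order By m1.slot_time
Limit  1;

The nurse-side pass follows the same pattern against the nurse availability columns, and the two results are then joined on the slot id as shown above.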