Re: Massive parallel queue table causes index deterioration, but REINDEX fails with deadlocks.

On Mon, Feb 25, 2019 at 11:13 AM Gunther Schadow <gunther@xxxxxxxxxxxxxxxxx> wrote:

Anyway, I think the partitioned table is the right and brilliant solution, because an index really isn't required. The actual pending partition will always remain quite small, and being a queue, it doesn't even matter how big it might grow, as long as new rows are inserted at the end rather than in the middle of the data file, and there is some way to quickly skip over the dead rows at the beginning that have already been processed and moved away.

Why do you want to do that?  If you are trying to force the queue to be handled in a "fair" order, then this isn't the way to do it; you would want to add an "ORDER BY" to your dequeueing query (in which case you are probably back to adding an index).
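
A fair dequeue with many parallel workers usually ends up looking something like the sketch below.  This is just an illustration, not your schema: the table and column names are made up, and SKIP LOCKED (available since 9.5) is the usual way to keep concurrent workers from blocking on each other's rows.

    -- Hypothetical queue table; names are illustrative only.
    CREATE TABLE queue (
        id          bigserial PRIMARY KEY,
        enqueued_at timestamptz NOT NULL DEFAULT now(),
        payload     jsonb
    );
    CREATE INDEX queue_enqueued_at_idx ON queue (enqueued_at);

    -- Dequeue the oldest pending row, skipping rows other workers hold.
    DELETE FROM queue
    WHERE id = (
        SELECT id
        FROM queue
        ORDER BY enqueued_at
        LIMIT 1
        FOR UPDATE SKIP LOCKED
    )
    RETURNING *;

The ORDER BY is what gives you fairness, and it is also why you end up wanting the index again.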

Once the space at the beginning of the table has been reclaimed as free, it will be reused for newly inserted tuples.  After the space is freed up but before it is reused, a seq scan can't skip those blocks entirely, but it can deal with them quickly because they are empty.  If the blocks are full of dead but not-yet-freed tuples (because the long-running transactions prevent them from being cleaned up), then it has to examine each dead tuple to satisfy itself that it actually is dead.  This might not be as bad as it is for indexes, but it certainly won't be good for performance.
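
If you want to see how much of the table is already-reclaimed free space versus dead-but-not-yet-removed tuples, the pgstattuple contrib extension will show you.  A rough sketch, using the hypothetical table from above:

    -- pgstattuple does a full scan of the relation, so use it sparingly.
    CREATE EXTENSION IF NOT EXISTS pgstattuple;

    SELECT tuple_count,        -- live rows
           dead_tuple_count,   -- dead rows a scan still has to step over
           free_space,         -- bytes already reclaimed and reusable
           free_percent
    FROM pgstattuple('queue');

A large dead_tuple_count that refuses to shrink is the signature of the long-running transactions holding back cleanup.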

 Cheers,

Jeff
