"John Smith" <sodgodofall@xxxxxxxxx> writes:
> I have a pg instance with 700GB of data, almost all of which is in one
> table. When I PREPARE and then COMMIT PREPARED a transaction that
> reads & writes to a large fraction of that data (about 10%,
> effectively randomly chosen rows and so every file in the table is
> modified), the COMMIT PREPARED sometimes takes a very long time--2 to
> 5 minutes. Is this expected?

It's impossible to say without knowing more about what the transaction
did. But one piece of data you could check easily is the size of the
2PC state file (look into $PGDATA/pg_twophase/).

			regards, tom lane
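The check suggested above can be done from the shell; a minimal sketch, assuming $PGDATA is set to the server's data directory (adjust the path for your installation):

```shell
# List any prepared-transaction state files and their on-disk sizes.
# Each file under pg_twophase/ is named after the transaction's XID;
# a large file here means the 2PC state record itself is large.
ls -lh "$PGDATA/pg_twophase/"
```

An empty directory simply means no transactions are currently prepared, so run this while the prepared transaction is still pending (after PREPARE, before COMMIT PREPARED).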