On 9/2/16 7:39 PM, dandl wrote:
> I don't think this is quite true. The mechanism he proposes has a small window in which committed transactions can be lost, and this should be addressed by replication or by a small amount of UPC (a few seconds).
Except that's exactly the point where all those kinds of solutions *completely* part ways with Postgres. Postgres is designed to *lose absolutely no data after a COMMIT*, potentially including requiring that data to be synchronized out to a second server. That is worlds apart from "we might lose a few seconds", and there's a lot of stuff Postgres has to worry about to accomplish that. Some of that stuff can be short-circuited if you don't care (that's what SET synchronous_commit = off does), but there's always going to be some amount of extra work to support synchronous_commit = local or remote_*.
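To make that concrete, here's a rough sketch of the knob in question; these are the stock synchronous_commit values, and the remote_* ones only mean anything once synchronous_standby_names is configured:

    -- Fast but looser: COMMIT returns before the WAL record is flushed
    -- to disk, so a crash can drop the last few hundred milliseconds of
    -- committed transactions (no corruption, just lost commits).
    SET synchronous_commit = off;

    -- Wait for the local WAL flush only (the traditional guarantee),
    -- even when synchronous replication is configured.
    SET synchronous_commit = local;

    -- With synchronous_standby_names set, also wait for a standby to have
    -- flushed (on), written to its OS (remote_write), or replayed
    -- (remote_apply, 9.6+) the commit record before COMMIT returns.
    SET synchronous_commit = remote_write;

Note you can also flip it per transaction (SET LOCAL synchronous_commit = off) for data you genuinely don't mind losing, while leaving the rest of the workload fully durable.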
Presumably there are more improvements that could be made to Postgres in this area, but if you really don't care about losing a few seconds' worth of data and you need absolutely the best performance possible, then maybe Postgres isn't the right choice for you.
"All databases suck, each one just sucks in a different way." - Me, circa 1999.
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461