
Re: What limits Postgres performance when the whole database lives in cache?

> > I don't think this is quite true. The mechanism he proposes has a
> > small window in which committed transactions can be lost, and this
> > should be addressed by replication or by a small amount of UPS (a few
> > seconds).
> 
> Except that's the entire point where all those kind of solutions
> *completely* depart ways from Postgres. Postgres is designed to *lose
> absolutely no data after a COMMIT*, potentially including requiring
> that data to be synchronized out to a second server. That is worlds
> apart from "we might lose a few seconds", and there's a lot of stuff
> Postgres has to worry about to accomplish that. Some of that stuff can
> be short-circuited if you don't care (that's what SET
> synchronous_commit = off does), but there's always going to be some
> amount of extra work to support synchronous_commit = local or
> remote_*.
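
For anyone skimming the thread: the "off ... local ... remote_*" levels mentioned above form a spectrum and can be set per session. The following is only a sketch of the knob, paraphrasing the PostgreSQL documentation; the remote_* values take effect only when synchronous standbys are configured:

    SET synchronous_commit = off;           -- COMMIT returns before the WAL is flushed locally
    SET synchronous_commit = local;         -- wait for the local WAL flush only
    SET synchronous_commit = remote_write;  -- ... plus the standby has received and written the WAL
    SET synchronous_commit = on;            -- ... plus the standby has flushed the WAL to disk
    SET synchronous_commit = remote_apply;  -- ... plus the standby has replayed the commit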

I understand that. What I'm trying to get a handle on is the magnitude of that cost and how it influences other parts of the product, specifically for Postgres. If the overhead of perfect durability were (say) 10%, few people would care about it. But Stonebraker puts the figure at 2500%! His presentation says that a pure relational in-memory store can beat a disk-based row store by 10x to 25x even when the row store's data is fully cached in memory. [Ditto: column stores beat row stores by 10x on complex queries over non-updatable data.]

So my question is not to challenge the Postgres way. It's simply to ask whether there are any known figures that would directly support or refute his claims. Does Postgres really spend 96% of its time thumb-twiddling once the entire database resides in memory?
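
Just to spell out where that 96% comes from: it is the 25x claim turned inside out. If removing the overhead makes things 25x faster, then only 1/25 of the cycles were doing useful work in the first place. A trivial sanity check of the arithmetic (nothing Postgres-specific about it):

    SELECT 1 - 1.0/25 AS claimed_overhead_fraction;   -- 0.96, i.e. 96%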

> Presumably there's more improvements that could be made to Postgres in
> this area, but if you really don't care about losing seconds worth of
> data and you need absolutely the best performance possible then maybe
> Postgres isn't the right choice for you.

Achieving durability for an in-memory database requires a UPS, active replication, or both, which is an additional cost that not every application needs. My question comes before that one: is there a big performance gain there for the taking, or is it smoke and mirrors?
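
For what it's worth, the crude experiment I have in mind looks something like the sketch below. It isolates only the commit-durability slice of the cost, not the buffer-manager, locking and latching overheads that Stonebraker lumps into his figure; the scale factor, client counts and database name are placeholders:

    -- build a pgbench database small enough to stay fully cached
    --   (shell)  createdb bench && pgbench -i -s 10 bench

    -- baseline: full durability (the default)
    ALTER SYSTEM SET synchronous_commit = on;
    SELECT pg_reload_conf();
    --   (shell)  pgbench -c 8 -j 8 -T 60 bench

    -- relaxed: a crash may lose the last fraction of a second of commits
    ALTER SYSTEM SET synchronous_commit = off;
    SELECT pg_reload_conf();
    --   (shell)  pgbench -c 8 -j 8 -T 60 bench    -- compare tps with the baseline

Whatever ratio that produces would only bound the durability slice of the claimed 2500%, which is exactly the kind of measured figure I'm after.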

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org










