Re: What limits Postgres performance when the whole database lives in cache?

2016-09-11 7:20 GMT+02:00 dandl <david@xxxxxxxx>:
> From: Jim Nasby [mailto:Jim.Nasby@BlueTreble.com]
> My guess is this is a test scenario that completely favors VoltDB
> while hamstringing Postgres, such as using no transaction durability
> at all in VoltDB while using maximum durability in Postgres. Comparing
> the cost of every COMMIT doing an fsync vs not could certainly produce
> a 25x difference. There could be other cases where you'd get a 25x
> difference.

I guess my question then is: how much do you pay for that durability? If you benchmark Postgres configured for pure in-memory usage with absolutely no writes to disk (or SSD or network), where does it spend its time? Is there a lot of overhead in moving data in and out of the buffer cache, in format conversions, and in concurrency control?

It is not only about durability.

Postgres holds data in memory in the same format as (or one very similar to) the data saved on persistent storage, so there is repeated serialization and deserialization. Some structures, like the B-tree, are designed to be easy to persist; raw in-memory performance is a secondary goal.

I believe the new in-memory databases can be 10-100x faster, depending on the use case, because they hold data primarily in memory and use different data structures. Their performance is great as long as all the data stays resident in memory at all times, but it becomes pretty bad when that assumption fails. There is another issue: as you push write throughput higher, you will probably run into file system limits and spinlock contention, which is one reason why big systems are increasingly built as distributed systems.

Regards

Pavel

 

As a case study, assume an RDBMS is required to monitor and record Internet (or phone or VHF) traffic. If the power goes off the traffic continues, and it really doesn't matter whether you lose 60 seconds of downtime or 63 seconds; either way another instance in another data centre will pick up the slack. So the requirement is atomicity, yes, but not durability. Should you bid Postgres for the job, or look elsewhere? How much slower would Postgres be than a competitor? Do we care?

> You need to be careful of benchmarks from commercial companies. MySQL
> used to tout how fast it was compared to Postgres, using a benchmark
> it created specifically for that purpose that had very little to do
> with the real world. People eventually discovered that as soon as you
> had a concurrent workload Postgres was actually faster.

Of course; but at the same time, insisting on including durability favours Postgres when I'm actually asking about the alternatives.

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org
