
Re: Tuning for a tiny database

Greg Smith wrote:
> On 06/21/2011 01:49 AM, CSS wrote:
>> Some raw numbers: We're only looking at a total of about six tables in
>> one db.  In total there are going to be well under 10,000 records in ALL
>> tables.  That might increase to at most 100,000 in the next few years.
>> Our raw DNS queries/second tops out around 50 qps over three distinct
>> servers.  Keeping in mind that PowerDNS is doing heavy caching, we
>> should never really see more than a few db queries per second.
> 
> I doubt you really need to do any tuning for this scenario.  I would set
> shared_buffers to a modest value--maybe 256MB--and stop further tuning
> until there's some evidence it's necessary.

Sounds good.  I'm at... wait for it... 8MB now.  This seems to work.
Since I originally posted this I've been benchmarking powerdns and I've
found that with it only hitting one core I can do about 50,000
queries/second.  During that time, Postgres is not even a blip.  I think
with the heavy caching powerdns does, it just doesn't really hit
postgres until it sees something in the cache has reached the max TTL.
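For my own notes, bumping it later should just be a matter of checking the
current value and editing postgresql.conf (untested here, values purely
illustrative):

    -- current setting, from psql
    SHOW shared_buffers;

    # postgresql.conf -- needs a server restart to take effect
    shared_buffers = 256MB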

In short, heavy DNS query traffic is not in turn causing heavy database
traffic.  This might change with someone pummeling it with queries for
domains or hosts that don't exist, but I believe PowerDNS handles negative
caching of those records as well.

> If presented with the same problem but with the much harder twist "I
> need to support >10,000 queries/second", I would recommend:
> 
> -Populate a prototype with a representative amount of data
> -Measure the database size
> -Set shared_buffers to that

Thank you.  I'm fairly green with Postgres, so this is very helpful.
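For anyone else finding this in the archives, I take it the "measure the
database size" step boils down to something like this (untested on my end,
just from the docs):

    -- total on-disk size of the current database
    SELECT pg_size_pretty(pg_database_size(current_database()));

    -- or broken down by table, biggest first
    SELECT relname, pg_size_pretty(pg_total_relation_size(oid))
      FROM pg_class
     WHERE relkind = 'r'
     ORDER BY pg_total_relation_size(oid) DESC
     LIMIT 10;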

> -Whenever the database is restarted, construct a series of queries that
> forces all the data used regularly into the database's cache
> -Use pg_buffercache to confirm what's in there is what you expect

I need to read up on that module, but it looks quite intriguing.
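From a quick skim of the docs, checking what's actually sitting in
shared_buffers looks like it would go something like this (assuming the
pg_buffercache contrib module is installed; untested):

    -- one-time setup on 9.1+; older releases load the contrib SQL script instead
    CREATE EXTENSION pg_buffercache;

    -- which relations occupy shared buffers right now, largest first
    SELECT c.relname, count(*) AS buffers
      FROM pg_buffercache b
      JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
     WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                                  WHERE datname = current_database()))
     GROUP BY c.relname
     ORDER BY buffers DESC;

So presumably the warm-up step after a restart is just running whatever
SELECTs our app issues most often, then eyeballing that view to confirm the
tables are resident.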

> Getting all the data into cache is sometimes harder than expected.  Some
> optimizations in PostgreSQL keep it from caching large amounts of a table
> when you do a sequential scan of its contents, as one example that
> complicates things.  But if you get to where this is necessary, building
> such a tool isn't difficult, and there are some projects out there that
> address this particular need:  filling the cache back up with relevant
> data after restart.  This is the main one:
> 
> http://pgfoundry.org/projects/pgfincore/
> http://www.pgcon.org/2010/schedule/events/261.en.html

Thanks again, I appreciate all your input on what's currently looking
like a non-problem.  But there are other places I can certainly use this
info.

Charles



