I have a PostgreSQL database that I'm using to log data.
There's basically one table where each row is a line from my log files.
It's getting to a size where queries are running very slowly, though. There
are about 10 million log lines per day, and I keep 30 days of data in it.
All the columns I filter on are indexed (mostly I just filter on the date),
and I tend to pull one day of data at a time, with counts grouped by one or
two other columns. There also tend to be only one or two of these large
queries running at any given time, so a lot of resources can be thrown at
each one.
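For reference, a typical query looks something like this (the table and
column names here are simplified stand-ins, not my actual schema):

SELECT status, source, count(*)
FROM log_lines
WHERE log_date = '2009-01-15'   -- always a single day's worth
GROUP BY status, source;

That's the shape of nearly every big query I run.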
I'm wondering what my resource parameters should be for the best SELECT
performance on this database, since I haven't seen a good example of anyone
tuning for this kind of workload.
The machine is an 8-core Opteron box (I know I won't really use all those
cores, but Dell threw in the second processor for free) with 8 GB of RAM.
The database is on a RAID 10 JFS partition.
This is what I have in postgresql.conf right now:
shared_buffers = 64MB
work_mem = 128MB
maintenance_work_mem = 256MB
max_fsm_pages = 614400
max_fsm_relations = 10000
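For what it's worth, the rules of thumb I've seen suggest something like the
following on an 8 GB machine (shared_buffers around 25% of RAM,
effective_cache_size covering roughly what PostgreSQL and the OS can cache
together), but I don't know how well they apply to this workload:

shared_buffers = 2GB              # ~25% of the 8 GB of RAM
effective_cache_size = 6GB        # planner hint: shared_buffers plus OS cache
work_mem = 256MB                  # per sort/hash; only 1-2 big queries at once
maintenance_work_mem = 512MB      # used by VACUUM and index builds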
Can anyone give me some insight into what I should set these to, or tell me
if there are other parameters I should be using that I'm missing?
Thanks,
Alex
--
Alex Thurlow
Blastro Networks
http://www.blastro.com
http://www.roxwel.com
http://www.yallwire.com