Hi all,
I have a table with about 73 million rows in it, and growing. I'm running
PostgreSQL 9.0.x on a server that, unfortunately, is a little I/O
constrained. Some possibly pertinent settings:
default_statistics_target = 50
maintenance_work_mem = 512MB
constraint_exclusion = on
effective_cache_size = 5GB
work_mem = 18MB
wal_buffers = 8MB
checkpoint_segments = 32
shared_buffers = 2GB
The server has 12GB RAM and 4 cores, but it's shared with a big webapp
running in Tomcat -- and I only have a RAID1 array to work with. Woe is me...
Anyway, this table is going to keep growing, and it's used frequently for
both reads and writes. From what I've read, it's a candidate for
partitioning for performance and scalability. I have tested some scripts
that build the child tables (via INHERITS) with their CHECK constraints,
plus the trigger function that routes inserts -- roughly like the sketch
below.
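To make that concrete, here's the shape of what I've been testing. Table,
column, and partition names are made up for the example (my real schema is
different), and there is one child table and one IF branch per range:

CREATE TABLE events (
    id         bigint    NOT NULL,
    created_at timestamp NOT NULL,
    payload    text
);

-- one child per month, constrained so constraint_exclusion can prune it
CREATE TABLE events_2013_01 (
    CHECK (created_at >= DATE '2013-01-01' AND created_at < DATE '2013-02-01')
) INHERITS (events);

-- trigger function that routes inserts on the parent to the right child
CREATE OR REPLACE FUNCTION events_insert_trigger()
RETURNS trigger AS $$
BEGIN
    IF NEW.created_at >= DATE '2013-01-01'
       AND NEW.created_at < DATE '2013-02-01' THEN
        INSERT INTO events_2013_01 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'no partition for created_at = %', NEW.created_at;
    END IF;
    RETURN NULL;  -- row already went to a child; don't insert into the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER events_before_insert
    BEFORE INSERT ON events
    FOR EACH ROW EXECUTE PROCEDURE events_insert_trigger();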
Am I doing the right thing by partitioning this? If so, given that I can
afford some downtime, is dumping the table via pg_dump and then loading it
back in the best way to do this?
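The alternative I've been weighing against a dump/reload is an in-place
move, something like this (same made-up names as above; one INSERT per
child rather than pushing 73 million rows through the trigger row by row):

BEGIN;
ALTER TABLE events RENAME TO events_old;
-- ... run the partition-creation script here (new parent, children, trigger) ...
INSERT INTO events_2013_01
    SELECT * FROM events_old
    WHERE created_at >= DATE '2013-01-01' AND created_at < DATE '2013-02-01';
-- ... one INSERT ... SELECT per child ...
COMMIT;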
Should I run CLUSTER or VACUUM FULL after it's all done?
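I.e., is something like this per child worth doing once the data is loaded
(index name made up for the example)?

CLUSTER events_2013_01 USING events_2013_01_created_at_idx;
ANALYZE events_2013_01;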
Is there a major benefit to upgrading to 9.2.x that I haven't realized?
Finally, if anyone has any comments about the settings listed above that
might help improve performance, thank you in advance.
-AJ