On Tue, Jul 7, 2009 at 4:58 PM, Jennifer Spencer<jenniferm411@xxxxxxxxxxx> wrote:
> Hi - Some answers, questions and comments below
>
>> So, what class machines are these, and specifically how much ram, what
>> kind of RAID controllers, and how many hard drives are you throwing at the
>> problem?
> We have two identical enterprise Linux machines coming. They are not here
> yet (hence my planning concerns). Presently, we have a lesser but still
> good enterprise machine with 64gb of RAM, 8 CPUs and ~1 TB of space. The
> new ones coming have 5 TB each of RAID disks mirrored across to another 5TB
> (20 5.4gb spinning disks). Not sure the RAID level but I think it's level
> six. They each have 16 CPUs, and (I believe) 128 Gb of RAM, connected to
> our NFS network on a 10g switch (at least I think it's the 10 gigabit switch
> - it might be one gigabit).

Assuming that your db is mostly read, RAID-6+1 is OK, but see if you can
benchmark things with RAID-10 to be sure, as RAID-10 is generally the best
choice. Also, get a battery-backed cache on the RAID controller if you can.

>> I wonder, what do you use to decide when to reindex? Under this
>> situation, it's very possible that you don't need to do it all that often.
>> Are you just flying 'willy-nilly' about reindexing things, or is there some
>> indicator you use?
> I am hoping to use system statistics for the table(s). In theory, once the
> index size gets to be a larger logical fraction of the table size than it
> reasonably should be, it's time to consider re-indexing. I thought to use
> some combination of pg_stat_user_indexes info and pg_statio_all_indexes, but
> I haven't had to do this yet and if you have suggestions, that'd be
> helpful. I should be able to run a check of things once/week and cron it to

Generally speaking, routine vacuuming should keep index and table bloat
under control and make reindexing largely unnecessary.

>> Define "maintenance". Maybe you're bringing along a few more bad habits
>> from Sybase.
> Oh, I am sure that's likely. Heh. I lock out the users, reindex things
> that need reindexing, based on use patterns and size, I update all
> histograms and statistics for user and system tables, I remove any dead rows
> possible
> (hopefully all of them),

Depending on your usage patterns, some dead space is actually desirable.

> and I make a nice clean tape set when I am done -
> before letting anyone back in. I often cycle the server (pg_ctl start/stop)
> as well. Sometimes we take that time to power cycle the whole machine
> and do any machine maintenance we can after the nice clean backup is done.

That's a good time to fsck the volumes as well.

>> If you're running a version prior to 8.4, make sure your Free Space
>> Manager settings are adequate. (A VACUUM VERBOSE at the database level will
>> report where you're at in that regard in the last few lines.)
> I show the following in postgresql.conf (this has not been tuned):
> # - Free Space Map -
> max_fsm_pages = 204800          # min max_fsm_relations*16, 6 bytes each
>                                 # (change requires restart)
> #max_fsm_relations = 1000       # min 100, ~70 bytes each
>                                 # (change requires restart)

Both are probably way too low. I'm guessing there are more than 1000
relations in your db for vacuum to track.
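To size them, run VACUUM VERBOSE at the database level, read the free space
map summary in the last few lines, and set both parameters comfortably above
what it reports. The values below are only placeholders to show the shape of
the change, not a recommendation for your data:

# postgresql.conf (pre-8.4 only; changes require restart)
max_fsm_pages = 2000000        # must exceed the total pages with free space
                               # reported at the end of VACUUM VERBOSE
max_fsm_relations = 10000      # must exceed the number of relations
                               # (tables + indexes) it reports

If max_fsm_pages is too small, vacuum can't remember all the free space it
finds, and tables bloat no matter how often you run it.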
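Going back to your reindexing question: I won't claim this is the one true
check, but as a sketch of the kind of weekly cron job you describe, a query
like this against pg_stat_user_indexes compares each index's on-disk size to
its table and shows whether the index is even being used (the 50% threshold
is arbitrary; pick whatever ratio makes sense for your schema):

-- Rough weekly check: indexes that look oversized relative to their table,
-- plus how often they have actually been scanned since the stats were reset.
SELECT schemaname,
       relname,
       indexrelname,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size,
       pg_size_pretty(pg_relation_size(relid))      AS table_size,
       idx_scan
  FROM pg_stat_user_indexes
 WHERE pg_relation_size(indexrelid) > pg_relation_size(relid) / 2
 ORDER BY pg_relation_size(indexrelid) DESC;

An index that shows up here with a tiny idx_scan count is a candidate for
dropping rather than reindexing.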
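Also, rather than doing all of this in a big offline maintenance window, make
sure autovacuum is handling the routine dead-row cleanup for you. The settings
below are roughly the stock defaults, shown only to illustrate which knobs
exist, not as tuned values for your workload:

# postgresql.conf -- let autovacuum do the day-to-day cleanup
autovacuum = on                          # on by default from 8.3
autovacuum_vacuum_scale_factor = 0.2     # vacuum a table once ~20% of its rows are dead
autovacuum_analyze_scale_factor = 0.1    # re-analyze once ~10% of its rows have changed

With that running, the lock-everyone-out reindex/vacuum pass becomes the
exception rather than the routine.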