On Sunday 06 December 2009 19:20:17 Andreas Thiel wrote:
> Hi Andres,
>
> Thanks a lot for your answers. As bottom line I think the answer is I
> have to rethink my DB structure.

Can't answer that one without knowing much more ;)

> > Could you please properly quote the email? The way you did it is quite
> > unreadable because you always have to guess who wrote what.
> I try to, is it now getting better? My apologies, still trying to adapt
> to using Office 07 :-)

Better, yes.

> Well, I know the data types of my columns sum up to 32 bytes right now
> (was about 100 before). As I only see a reduction of relpages/reltuples
> by 30%, not by a factor of 3, I assume that the row overhead kicks in.
> The data definition of the new table looks like this:
> bigint REFERENCES test_orig(test_id) ON DELETE CASCADE
> bigint REFERENCES part_orig(prt_id) ON DELETE CASCADE
> smallint
> bit(16)
> real
> text (usually empty in most rows)
> smallint
> I did calculate 32 bytes per row (if text is empty), but actually
> relpages/reltuples is about ~63 bytes. This would result in a per-row
> overhead of 31 bytes. Would it change anything if I remove the 2 FOREIGN
> KEY constraints?

If you remove those columns entirely, sure. If you remove only the
constraints, no.

The row overhead in 8.3/8.4 is 28 bytes afaik. You miss two points in your
calculation - one is alignment (i.e. an integer will only start at a 4-byte
boundary) and the other is that for text you need to store the length of
the column as well.

> > It's pg_prepared_xacts (note the s), sorry, my mind played a trick on me.
> Nothing inside this table as well. (I did also - while trying to improve
> postgresql.conf a few days ago - restart the server a couple of times, I
> think that would have removed any hanging transactions or prepares,
> shouldn't it?)

No, prepared transactions do not get removed by restarting. But that's fine
then.

> > > > > max_fsm_relations = 4194304 # min 100, ~70 bytes

max_fsm_relations is the maximum number of relations you want to track in
the fsm - currently that means you could have 4 million tables+indexes.

> No, but it seems at least VACUUM is now running fine and no longer
> complaining about too small a number for max_fsm_pages. Do you think if I
> reduce those two numbers, I'll have a better chance to run VACUUM FULL?
> Currently max_fsm_pages is slightly larger than relpages of my largest
> table. I read somewhere, max_fsm_pages should be about 1/2 of the total
> number of relpages in a DB, maybe another way to say it should be larger
> than the largest table...

The largest table does not really have any special influence on the fsm, so
I wouldn't count that rule as very good. It's not that easy to calculate the
size of the fsm correctly - that's why it's gone in 8.4...
I know of several instances running with a large max_fsm_pages - you could
try to reduce the max_fsm_relations setting; I don't know if there are
problems lurking with such an oversized value.

I actually doubt that's related to the OOM you're seeing though - what's
your maintenance_work_mem setting, what's your
/proc/sys/vm/overcommit_ratio, and how much swap do you have?

Andres
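
PS: To illustrate the alignment point, here's a quick sketch you can paste
into psql - the values are made up and only mimic your column types, but
pg_column_size() includes the padding, so reordering the same values
changes the result:

    -- a smallint followed by a bigint forces padding bytes before the bigint;
    -- putting the wide columns first packs tighter
    SELECT pg_column_size(ROW(1::smallint, 1::bigint, 1::smallint, 1::bigint)) AS narrow_first,
           pg_column_size(ROW(1::bigint, 1::bigint, 1::smallint, 1::smallint)) AS wide_first;

    -- even an empty text value is not free, its length header is stored too
    SELECT pg_column_size(''::text) AS empty_text;

The difference between the two row values is purely padding; the same
effect applies to a real table, so declaring the wide (8-byte) columns
first can shave a few bytes per row.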
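
PPS: Regarding the "half of total relpages" rule - if you want a number to
compare your max_fsm_pages against, a rough starting point is simply
summing relpages over the database (just a sketch, run it per database):

    -- pages currently used by tables, indexes and toast tables in this database
    SELECT sum(relpages) AS total_relpages
    FROM pg_class
    WHERE relkind IN ('r', 'i', 't');

A database-wide VACUUM VERBOSE should also print, at the very end, how many
page slots the fsm would actually need - that number is a better guide than
any rule of thumb.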