On 12/2/2005 6:01 PM, Michael Riess wrote:
> Hi,
>
> thanks for your comments so far - I appreciate it. I'd like to narrow
> down my problem a bit:
>
> As I said in the other thread, I estimate that only 20% of the 15,000
> tables are accessed regularly. So I don't think that vacuuming or the
> number of file handles is a problem. Have a look at this:
What makes you think that? Have you at least tried to adjust your shared
buffers, free space map settings and background writer options to values
that match your DB? How does increasing the kernel file descriptor
limit (try the current limit times 5 or 10) affect your performance?
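A sketch of the kind of adjustments meant here, assuming the 8.x-era postgresql.conf parameter names; the numbers are placeholders, not recommendations, and have to be sized to the machine's RAM and to how much space VACUUM actually reports as reusable:

```
# postgresql.conf -- illustrative values only
shared_buffers = 20000        # ~160 MB of 8 kB buffers; the default is far lower
max_fsm_relations = 20000     # with 15,000 tables plus indexes, one slot each
max_fsm_pages = 500000        # must cover the pages VACUUM reports as free
bgwriter_delay = 200          # ms between background writer rounds
```

The kernel descriptor limit is raised outside PostgreSQL (on Linux, the fs.file-max sysctl and the postmaster's ulimit), then the server restarted.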
Jan
> content2=# select relpages, relname from pg_class order by relpages desc
> limit 20;
>  relpages | relname
> ----------+---------------------------------
>     11867 | pg_attribute
>     10893 | pg_attribute_relid_attnam_index
>      3719 | pg_class_relname_nsp_index
>      3310 | wsobjects_types
>      3103 | pg_class
>      2933 | wsobjects_types_fields
>      2903 | wsod_133143
>      2719 | pg_attribute_relid_attnum_index
>      2712 | wsod_109727
>      2666 | pg_toast_98845
>      2601 | pg_toast_9139566
>      1876 | wsod_32168
>      1837 | pg_toast_138780
>      1678 | pg_toast_101427
>      1409 | wsobjects_types_fields_idx
>      1088 | wso_log
>       943 | pg_depend
>       797 | pg_depend_depender_index
>       737 | wsod_3100
>       716 | wp_hp_zen
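For scale: relpages counts 8 kB blocks, so pg_attribute alone is roughly 11867 x 8192 bytes, about 93 MB on disk. A query in the same style makes the sizes explicit (the 8192 block size is the compile-time default, assumed here):

```sql
-- Approximate on-disk size, assuming the default 8 kB block size.
SELECT relname,
       relpages,
       relpages * 8192 / (1024 * 1024) AS approx_mb
  FROM pg_class
 ORDER BY relpages DESC
 LIMIT 5;
```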
> I don't think that postgres was designed for a situation like this,
> where a system table that should be fairly small (pg_attribute) is this
> large.
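Catalog bloat like this usually comes from heavy temp-table or DDL churn rather than a design limitation, and is reclaimed by vacuuming the catalogs themselves. A sketch, assuming a maintenance window, since these commands take exclusive locks:

```sql
-- Compact the bloated system catalogs (exclusive lock; run during a
-- maintenance window).
VACUUM FULL VERBOSE pg_attribute;
VACUUM FULL VERBOSE pg_class;

-- VACUUM FULL in this era leaves the indexes bloated, so rebuild them too.
REINDEX TABLE pg_attribute;
REINDEX TABLE pg_class;
```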
---------------------------(end of broadcast)---------------------------
TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not
match
--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#================================================== JanWieck@xxxxxxxxx #