On 12/1/2005 2:34 PM, Michael Riess wrote:
>> VACUUM FULL was probably always overkill, unless "always" includes
>> versions prior to 7.3...
>
> Well, we tried switching to daily VACUUM ANALYZE and weekly VACUUM FULL,
> but the database got considerably slower near the end of the week.
This indicates that your free space map (FSM) settings are inadequate
for that many tables and, eventually, for the overall size of your
database. Try setting them to
max_fsm_relations = 80000
max_fsm_pages = (select sum(relpages) / 2 from pg_class)
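For reference, both numbers can be sanity-checked against the system
catalog. A minimal query, assuming you only care about tables, indexes
and toast tables (the relkind filter is my addition):

    -- run in the database in question; relpages is only as fresh
    -- as the last VACUUM or ANALYZE
    SELECT count(*)          AS relations,           -- vs. max_fsm_relations
           sum(relpages)     AS total_pages,
           sum(relpages) / 2 AS suggested_fsm_pages  -- vs. max_fsm_pages
    FROM pg_class
    WHERE relkind IN ('r', 'i', 't');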
Another thing you might be suffering from (depending on the rest of your
architecture) is file descriptor limits. Especially if you use some sort
of connection pooling or persistent connections, as PHP does, you will
have all the backends serving several of your logical applications (sets
of 30 tables). If on average one backend is called on by 50 different
apps, we are talking about 50*30*4 = 6000 files accessed by that
backend. The 80/20 rule leaves 1200 of those in active use per backend,
so 100 active backends lead to 120,000 open (virtual) file descriptors.
Now add to that any files a backend would have to open in order to evict
an arbitrary dirty block.
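Spelled out, with the assumptions above (50 apps per backend, 30 tables
per app, about 4 files per table, and the hot 20% from the 80/20 rule):

    50 apps * 30 tables * 4 files = 6000 files touched per backend
    6000 files * 20%              = 1200 files in active use per backend
    1200 files * 100 backends     = 120000 open (virtual) descriptors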
With a large shared buffer pool and slightly more aggressive background
writer settings, you can mostly avoid having regular backends evict
dirty blocks themselves.
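As a sketch of what "slightly more aggressive" could look like in an
8.1-era postgresql.conf (the values are illustrative assumptions to tune
for your workload, not recommendations; the lru defaults are far lower):

    shared_buffers        = 50000  # ~400 MB at 8 kB per buffer
    bgwriter_delay        = 200    # ms between bgwriter rounds
    bgwriter_lru_percent  = 5.0    # share of the LRU tail scanned per round
    bgwriter_lru_maxpages = 200    # cap on dirty pages written per round
    bgwriter_all_percent  = 1.0    # share of the whole pool scanned per round
    bgwriter_all_maxpages = 200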
If the kernel settings allow Postgres to keep that many file descriptors
open, you also avoid the directory lookups that come with closing and
reopening files over and over.
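To see whether the kernel side can take it (Linux shown; fs.file-max is
Linux-specific) and to let each backend hold more files open than the
default of 1000:

    # kernel-wide limit on open files
    cat /proc/sys/fs/file-max
    # raise it if needed; the value here is an illustrative assumption
    sysctl -w fs.file-max=300000

and in postgresql.conf:

    max_files_per_process = 2000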
Jan
--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#================================================== JanWieck@xxxxxxxxx #