Hi,

I have a problem with large objects in PostgreSQL 8.1: the performance of loading large objects into the database degrades badly after a few days of operation.

I have a cron job kicking in twice a day, which generates and loads around 6000 large objects of 3.7 MB each. Each night, old data is deleted, so there are never more than 24000 large objects in the database. (A simplified sketch of this load/cleanup pattern is included at the end of this mail.)

If I start loading on a freshly installed database, a full load takes around 13 minutes, including generating the data to be stored. If I let the database run for a few days, it takes much longer: after one or two days, the same job needs almost an hour, and the logs indicate that the extra time is spent solely on transferring the large objects from file to database. Turning autovacuum on or off seems to have no effect on this.

I have only made the following changes to the default postgresql.conf file:

  max_fsm_pages = 25000000
  vacuum_cost_delay = 10
  checkpoint_segments = 256

So, my question for you is: why does this happen, and what can I do about it?

Regards,

Vegard Bønes
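
P.S. For reference, the load and nightly cleanup boil down to roughly the following. This is only a simplified sketch: the real job is driven by an external program, and the table name, column names, file paths, and retention interval below are purely illustrative.

  -- Illustrative table that keeps a reference (OID) to each large object.
  CREATE TABLE blob_store (
      id       serial PRIMARY KEY,
      created  timestamp NOT NULL DEFAULT now(),
      content  oid NOT NULL
  );

  -- Twice-daily load: import each generated file as a large object.
  -- lo_import() runs server-side and returns the OID of the new large object.
  INSERT INTO blob_store (content)
      VALUES (lo_import('/data/generated/objects/file_0001.dat'));

  -- Nightly cleanup: unlink the large objects, then delete the referencing rows.
  SELECT lo_unlink(content)
      FROM blob_store
      WHERE created < now() - interval '1 day';

  DELETE FROM blob_store
      WHERE created < now() - interval '1 day';

Note that deleting a row in blob_store does not by itself remove the large object's data; that is what the lo_unlink() call is for.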