Stefan --

----- Original Message -----
> From: Stefan Keller <sfkeller@xxxxxxxxx>
> To: Ivan Voras <ivoras@xxxxxxxxxxx>
> Cc: pgsql-performance@xxxxxxxxxxxxxx
> Sent: Monday, October 1, 2012 5:15 PM
> Subject: Re: Inserts in 'big' table slowing down the database
>
> Sorry for the delay. I had to sort out the problem (among other things).
>
> It's mainly about swapping.
>
> The table nodes contains about 2^31 entries and occupies about 80 GB of
> disk space, plus indexes.
> If the geom values were stored in a big array (with id as the array
> index), they would only take up about 16 GB, which means the ids are
> dense (with few deletes).
> Updates then arrive every hour as bulk INSERT statements whose entries
> have ids in sorted order.
> Now PG becomes slower and slower!
> CLUSTER could help -- but obviously that operation needs a table lock,
> and if it takes longer than an hour, it delays the next update.
>
> Any ideas? Partitioning?

pg_reorg, if you have the space, might be useful for doing a CLUSTER-like
operation without the long exclusive lock:
<http://reorg.projects.postgresql.org/>

Haven't followed the thread, so I hope this isn't redundant.

Partitioning might work if each partition can cover more than one hour's
worth of data -- too many partitions don't help. (Rough sketches of both
approaches below my sig.)

Greg Williamson
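
P.S. For concreteness, here's the CLUSTER vs. pg_reorg trade-off in code.
I'm assuming a table "nodes" clustered on a primary-key index "nodes_pkey"
over id -- adjust the names to your schema, and double-check the pg_reorg
flags against its docs, since I'm writing them from memory:

    -- CLUSTER rewrites the whole table in index order, but it holds an
    -- ACCESS EXCLUSIVE lock for the duration, so the hourly bulk loads
    -- (and all reads) are blocked while it runs:
    CLUSTER nodes USING nodes_pkey;

    -- pg_reorg does an equivalent rewrite online: it needs roughly twice
    -- the table's disk space for the shadow copy, but takes only brief
    -- locks. Run from the shell, something like:
    --   pg_reorg --table nodes --order-by id mydb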
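
And a minimal sketch of inheritance-based partitioning by id range (the
child-table names, the range boundaries, and the "staging_nodes" load
table are all invented for illustration):

    -- The parent stays empty; each child owns a contiguous id range.
    CREATE TABLE nodes_0 (CHECK (id >= 0          AND id < 1000000000))
        INHERITS (nodes);
    CREATE TABLE nodes_1 (CHECK (id >= 1000000000 AND id < 2000000000))
        INHERITS (nodes);
    -- ...more children (and per-child indexes) as the id space grows...

    -- With constraint_exclusion = partition (the default), queries that
    -- filter on id scan only the children whose CHECK ranges match.

    -- Since each hourly batch arrives with sorted ids, the loader can
    -- target the right child directly instead of routing rows through an
    -- INSERT trigger on the parent:
    INSERT INTO nodes_1 SELECT * FROM staging_nodes;

Since each child is far smaller than the 80 GB parent, a per-child CLUSTER
should also fit more comfortably inside the hourly window.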