2009/12/1 Rüdiger Sörensen <r.soerensen@xxxxxxx>:
> dear all,
>
> I am building a database that will be really huge and grow rapidly. It
> holds data from satellite observations. Data is imported via a Java
> application. The import is organized via files that are parsed by the
> application; each file holds the data of one orbit of the satellite.
> One of the tables will grow by about 40,000 rows per orbit, and there
> are roughly 13 orbits a day. The import of one day (13 orbits) into the
> database takes 10 minutes at the moment. I will have to import data back
> to the year 2000 or even older.
> I think there will be a performance issue when the table in question
> grows, so I partitioned it using a timestamp column and one child table
> per quarter. Unfortunately, the import of 13 orbits now takes 1 hour
> instead of 10 minutes as before. I can live with that, if the import
> time does not grow significantly as the table grows further.

I'm gonna guess you're using rules instead of triggers for partitioning?
Switching to triggers is a big help if you've got a large amount of data
to import / store. If you need some help on writing the triggers shout
back; I had to do this on our stats db this summer and it's been much
faster with triggers.
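
To give you a rough idea, here's a minimal sketch of the trigger approach
(the table and column names -- observations, obs_time, and the quarterly
children -- are made up; substitute your own schema, and add one branch
per quarter):

    -- Routes each row inserted into the parent to the right quarterly
    -- child table. Runs once per row, unlike a rule, which is rewritten
    -- into the query and gets expensive on bulk imports.
    CREATE OR REPLACE FUNCTION observations_insert_trigger()
    RETURNS trigger AS $$
    BEGIN
        IF NEW.obs_time >= DATE '2009-10-01'
           AND NEW.obs_time <  DATE '2010-01-01' THEN
            INSERT INTO observations_2009q4 VALUES (NEW.*);
        ELSIF NEW.obs_time >= DATE '2009-07-01'
           AND NEW.obs_time <  DATE '2009-10-01' THEN
            INSERT INTO observations_2009q3 VALUES (NEW.*);
        ELSE
            RAISE EXCEPTION 'obs_time % out of range, add a partition',
                NEW.obs_time;
        END IF;
        -- Returning NULL suppresses the insert into the parent itself;
        -- the row has already gone into the child.
        RETURN NULL;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER insert_observations_trigger
        BEFORE INSERT ON observations
        FOR EACH ROW EXECUTE PROCEDURE observations_insert_trigger();

One tip: since you're importing whole orbits at a time, put the branch for
the quarter you're currently loading first, so most rows match on the
first comparison.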