dear all,
I am building a database that will be very large and will grow rapidly. It
holds data from satellite observations. The data is imported via a Java
application. The import is organized around files that are parsed by the
application; each file holds the data of one orbit of the satellite.
One of the tables will grow by about 40,000 rows per orbit, and there are
roughly 13 orbits a day. Importing one day (13 orbits) into the database
currently takes 10 minutes. I will have to import data going back to the
year 2000 or even earlier.
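That works out to roughly 13 x 40,000 = 520,000 new rows per day, or on the
order of 190 million rows per year, for this one table alone.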
I expect a performance issue once the table in question grows, so I
partitioned it on a timestamp column, with one child table per quarter.
Unfortunately, the import of 13 orbits now takes 1 hour instead of the
previous 10 minutes. I can live with that, as long as the import time does
not grow significantly as the table grows further.
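To make the setup concrete, here is a stripped-down sketch of the kind of
partitioning I mean. Table and column names are placeholders, only the
first quarter is shown, and the trigger-based routing is the usual textbook
approach (the real schema has more columns and one child per quarter):

    -- parent table; names are placeholders for the real schema
    CREATE TABLE observations (
        obs_time timestamptz NOT NULL,
        value    double precision
    );

    -- one child table per quarter, each with a CHECK constraint on its range
    CREATE TABLE observations_2000q1 (
        CHECK (obs_time >= '2000-01-01' AND obs_time < '2000-04-01')
    ) INHERITS (observations);
    -- ... one such child per quarter ...

    -- trigger function that redirects each insert into the matching child
    CREATE OR REPLACE FUNCTION observations_insert() RETURNS trigger AS $$
    BEGIN
        IF NEW.obs_time >= '2000-01-01' AND NEW.obs_time < '2000-04-01' THEN
            INSERT INTO observations_2000q1 VALUES (NEW.*);
        -- ... one branch per quarter ...
        ELSE
            RAISE EXCEPTION 'obs_time out of range: %', NEW.obs_time;
        END IF;
        RETURN NULL;  -- the row has already been stored in a child table
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER observations_route
        BEFORE INSERT ON observations
        FOR EACH ROW EXECUTE PROCEDURE observations_insert();

With constraint_exclusion enabled, SELECTs that filter on the timestamp
column should then only scan the matching quarters.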
Anybody with comments/advice?
tia,
Ruediger.