Tarlika Elisabeth Schmitz wrote:
I have a database that will be populated solely by CSV import.
There are several CSV file formats, all denormalized.
I have created interim tables which match the CSV file formats. An
insert trigger distributes the data to the appropriate destination
tables. The destination tables themselves have insert/update triggers
for automated data clean-up. Any unresolvable inconsistencies are
reported in a log table.
I don't want the triggers to fire for every insert/update. There might
be situations where I have to perform some data clean-up manually.
So, my idea is to create a role for import, query current_user in the
trigger, perform the trigger actions for importuser and just return the
row unadulterated for adminuser.
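
Roughly what I have in mind (a sketch only; the customer table, its
name column, and the clean-up body are placeholder examples, the
current_user test is the point):

    CREATE OR REPLACE FUNCTION customer_cleanup() RETURNS trigger AS $$
    BEGIN
        -- adminuser (or anyone else): hand the row back unadulterated
        IF current_user <> 'importuser' THEN
            RETURN NEW;
        END IF;
        -- importuser: automated clean-up
        NEW.name := trim(NEW.name);
        -- ... resolve lookups, write unresolvable rows to the log table ...
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER customer_cleanup
        BEFORE INSERT OR UPDATE ON customer
        FOR EACH ROW EXECUTE PROCEDURE customer_cleanup();
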
I would grant the importuser privileges on the tables being
explicitly and implicitly populated.
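
For example (the interim, destination, and log table names are
placeholders; the log table needs INSERT because the triggers run with
the privileges of the user who fired them):

    GRANT SELECT, INSERT ON interim_customer TO importuser;
    GRANT SELECT, INSERT, UPDATE ON customer TO importuser;
    GRANT SELECT, INSERT ON import_log TO importuser;
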
Is that the best way to organize this?
=====
setup: PostgreSQL 8.4
dbname = schema name = admin name
You seem to be writing denormalized import records for the sole purpose
of writing other normalized records. Have you looked into writing a
programme in a relatively high-level, JDBC-friendly language which reads
the CSV file, normalizes the data (the logic already in your triggers),
and flushes every, say, 1000 independent records? The "clean-up" and
logging might also be done by the import app (all depending on what's
being cleaned up and logged :) )
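
At the SQL level each flush from such an app would then just be one
transaction per batch, along these lines (hypothetical customer table,
two rows shown instead of 1000):

    BEGIN;
    INSERT INTO customer (customer_id, name) VALUES (1, 'Smith');
    INSERT INTO customer (customer_id, name) VALUES (2, 'Jones');
    -- ... up to the batch size, then:
    COMMIT;
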