On 2021-11-16 13:56:51 +0530, Sanjay Minni wrote:
> I need to keep a copy of old data as the rows are changed.
>
> For a general RDBMS I could think of keeping all the data in the same table
> with a flag to indicate older copies of updated / deleted rows, or keep a
> parallel table and copy these rows into the parallel table under program /
> trigger control. Each has its pros and cons.
>
> In Postgres would I have to follow the same methods, or are there any
> features / packages available?

There is an extension (I think it's this one:
https://pgxn.org/dist/temporal_tables/) which we've used for one project so
far. I'm not sure whether I would use it for further projects: it does what
it's supposed to do, and it's nice not having to reinvent the wheel. But it
did cause some problems during a PostgreSQL upgrade (nothing insurmountable,
just a bit of a hassle) and, IIRC, also with Django migration scripts. So,
given that it's easy to roll your own, I probably would (but I do suffer a
bit from NIH syndrome in general, so that's not surprising).

I would strongly advise using triggers. In my experience, implementing that
in the application logic is a recipe for inconsistencies: someone will
always forget to duplicate a row, or set the wrong timestamp, or whatever
(BTDT).

I don't have a strong preference between just using valid_from/valid_to
fields in the same table(s) and using separate history tables. Both work;
one or the other may be slightly more convenient depending on the use case.
Since you mentioned that the purpose is auditing, I'd probably lean towards
separate tables in your case.

        hp

-- 
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | hjp@xxxxxx         |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"
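
For illustration, a minimal sketch of the trigger-based, separate-history-table
approach described above might look like the following. All names here
(accounts, accounts_history, log_accounts_change) are hypothetical, not from
the original post, and the CREATE TRIGGER syntax assumes PostgreSQL 11 or
later (older releases use EXECUTE PROCEDURE instead of EXECUTE FUNCTION):

    -- Hypothetical base table.
    CREATE TABLE accounts (
        id      integer PRIMARY KEY,
        balance numeric NOT NULL
    );

    -- History table: one row per superseded version, plus audit metadata.
    CREATE TABLE accounts_history (
        id         integer,
        balance    numeric,
        changed_at timestamptz NOT NULL DEFAULT now(),
        changed_by text        NOT NULL DEFAULT current_user,
        operation  text        NOT NULL  -- 'UPDATE' or 'DELETE' (from TG_OP)
    );

    -- Trigger function: copy the old version of the row into the history
    -- table before it is replaced or removed.
    CREATE FUNCTION log_accounts_change() RETURNS trigger AS $$
    BEGIN
        INSERT INTO accounts_history (id, balance, operation)
        VALUES (OLD.id, OLD.balance, TG_OP);
        RETURN NULL;  -- return value is ignored for AFTER triggers
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER accounts_audit
        AFTER UPDATE OR DELETE ON accounts
        FOR EACH ROW EXECUTE FUNCTION log_accounts_change();

Because the trigger fires in the same transaction as the UPDATE or DELETE,
the history row and the change commit or roll back together; an
application-level copy offers no such guarantee, which is the inconsistency
risk mentioned above.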