
Re: Temporal Databases


 



Ok, but actually I'm not concerned with logging old values. I'm concerned with checking temporal constraints: entity integrity (PK) and referential integrity (FK).



For example, if you have the salary table:



Salary (employee_id, salary, start_date, end_date)



Where [start_date, end_date] is an interval, meaning that the salary is (was) valid during that period of time.



I have to avoid this occurrence:



     employee_id  salary  start_date  end_date
     001          1000    2005-01-20  2005-12-20
     001          2000    2005-06-20  2006-04-20




So we need to compare intervals, not only atomic values. If you want to know what the salary was on 2005-07-25, there is no single answer: the data is inconsistent!
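A minimal sketch of the overlap test involved (plain Python rather than SQL, with the dates read as ISO YYYY-MM-DD):

```python
from datetime import date

def overlaps(start_a, end_a, start_b, end_b):
    """Two closed intervals [start_a, end_a] and [start_b, end_b]
    overlap iff each one starts no later than the other ends."""
    return start_a <= end_b and start_b <= end_a

row1 = (date(2005, 1, 20), date(2005, 12, 20))   # salary 1000
row2 = (date(2005, 6, 20), date(2006, 4, 20))    # salary 2000

# The two validity periods overlap, so the "temporal PK" is violated:
print(overlaps(*row1, *row2))   # True

# 2005-07-25 falls inside both intervals, hence the ambiguity:
d = date(2005, 7, 25)
print(row1[0] <= d <= row1[1], row2[0] <= d <= row2[1])   # True True
```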



Of course I can develop functions and triggers to accomplish this. But the idea is to keep it simple for developers: as simple as declaring a primary key!
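As a sketch of what such a trigger would have to enforce, here is a toy Python stand-in (the `TemporalTable` class and its method names are invented for illustration, not an actual PostgreSQL API):

```python
from datetime import date

class TemporalTable:
    """Toy stand-in for a trigger enforcing a temporal primary key:
    for a given employee_id, validity intervals must not overlap."""
    def __init__(self):
        self.rows = []   # (employee_id, salary, start_date, end_date)

    def insert(self, emp, salary, start, end):
        if start > end:
            raise ValueError("start_date after end_date")
        # Reject any row whose interval overlaps an existing one for the same key.
        for e, _, s, t in self.rows:
            if e == emp and start <= t and s <= end:
                raise ValueError(f"overlapping interval for employee {e}")
        self.rows.append((emp, salary, start, end))

t = TemporalTable()
t.insert("001", 1000, date(2005, 1, 20), date(2005, 12, 20))
try:
    t.insert("001", 2000, date(2005, 6, 20), date(2006, 4, 20))
except ValueError as err:
    print("rejected:", err)
```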



Thanks for your attention!!



----- Original Message ----- From: "Brad Nicholson" <bnichols@xxxxxxxxxxxxxxx>
To: "Simon Riggs" <simon@xxxxxxxxxxxxxxx>
Cc: "Rodrigo Sakai" <rodrigo.sakai@xxxxxxxxxxx>; "Michael Glaesemann" <grzm@xxxxxxxxxxxxx>; <pgsql-general@xxxxxxxxxxxxxx>
Sent: Friday, February 24, 2006 1:56 PM
Subject: Re: [GENERAL] Temporal Databases


Simon Riggs wrote:

A much easier way is to start a serialized transaction every 10 minutes
and leave the transaction idle-in-transaction. If you decide you really
need to you can start requesting data through that transaction, since it
can "see back in time" and you already know what the snapshot time is
(if you record it). As time moves on you abort and start new
transactions... but be careful that this can affect performance in other
ways.



We're currently prototyping a system (still very much in its infancy) that uses the Slony-I log shipping mechanism to build an offline temporal system for point-in-time reporting. The idea is that the log shipping files contain only the committed inserts, updates and deletes. Those log files are then applied to an offline system, which has a trigger defined on each table that rewrites each statement, based on its type, into a temporally sensitive format.

If you want to get an exact point-in-time snapshot with this approach, you will need timestamp columns on every table in your source database recording the exact time of each statement. Otherwise, a best guess (based on the time the Slony sync was generated) is the closest you will be able to come.
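A sketch of the point-in-time replay this implies (Python; the log format here is invented for illustration, not Slony-I's actual file format):

```python
from datetime import datetime

# Hypothetical statement log: (timestamp, op, pk, row) tuples, as a
# trigger on the offline system might record them per table.
log = [
    (datetime(2006, 2, 24, 9, 0),  "INSERT", 1, {"salary": 1000}),
    (datetime(2006, 2, 24, 10, 0), "UPDATE", 1, {"salary": 2000}),
    (datetime(2006, 2, 24, 11, 0), "DELETE", 1, None),
]

def snapshot(log, at):
    """Replay committed statements up to 'at' to rebuild the table state."""
    table = {}
    for ts, op, pk, row in sorted(log):
        if ts > at:
            break
        if op in ("INSERT", "UPDATE"):
            table[pk] = row
        elif op == "DELETE":
            table.pop(pk, None)
    return table

print(snapshot(log, datetime(2006, 2, 24, 10, 30)))   # {1: {'salary': 2000}}
```

Without per-statement timestamps the cutoff `at` can only be approximated by the sync generation time, as noted above.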

--
Brad Nicholson 416-673-4106 Database Administrator, Afilias Canada Corp.





