Greg Stark wrote:
> My first reaction to this description was to consider some sort of model where
> the master database publishes text dumps of the master database which are
> regularly downloaded and loaded on the slaves. The slaves treat those tables
> as purely read-only reference tables.
>
> If you need data to propagate from the clients back to the server then things
> get more complicated. Even then you could side step a lot of headaches if you
> can structure the application in specific ways, such as guaranteeing that the
> clients can only insert, never update records.
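
(For the read-only half of that, I picture each clinic doing a
scheduled refresh along these lines. The table and file names are
invented, and it assumes the master published the dump with COPY TO
and the load runs as a superuser; psql's \copy would do for a
client-side file.)

-- Replace the clinic's copy of a master-published reference table
-- with the latest downloaded dump, in one transaction:
BEGIN;
TRUNCATE reference_formulary;
COPY reference_formulary FROM '/var/lib/sync/reference_formulary.copy';
COMMIT;
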
It's the latter, I'm afraid: data does need to flow from the clients
back up. The master actually won't be modifying or inserting any data
itself, just publishing it to the client databases in its domain.
Almost all inserts, updates, and deletes will occur on the leaf nodes,
i.e. at the remote health clinics and MMUs (mobile medical units).
What we need to ensure is that if Patient X visits Site A on Monday,
his records are available for a follow-up visit at Site B on Tuesday.
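
(Here's a rough sketch of the insert-only schema I have in mind for
the leaf nodes, along the lines Greg suggested; the table and column
names are made up. Each row carries its originating site plus a
locally generated id, so rows from different clinics merge at the
master without key collisions.)

CREATE TABLE patient_visit (
    site_code   text   NOT NULL,  -- e.g. 'CLINIC_A' or 'MMU_3'
    local_id    bigint NOT NULL,  -- from a per-site sequence
    patient_id  text   NOT NULL,  -- region-wide patient identifier
    visit_date  date   NOT NULL,
    notes       text,
    PRIMARY KEY (site_code, local_id)
);

-- Leaf nodes only ever INSERT here; a correction is recorded as a new
-- superseding row rather than an UPDATE or DELETE, so merging at the
-- master and redistributing to the other sites is a plain append.
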
Even this has salient problems: for instance, Patient X may visit
Site B before Site A has had time to replicate its latest data back to
the master, and before Site B has pulled those updates down.
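
(We can't close that window without connectivity, but each site could
at least detect it. Something like this hypothetical bookkeeping
table, updated as batches arrive via the master, would let Site B warn
staff that its view of another site may be stale; the names are
invented.)

CREATE TABLE sync_status (
    site_code     text        NOT NULL PRIMARY KEY,
    last_batch_at timestamptz NOT NULL  -- newest batch seen from that site
);

-- At check-in, flag any site whose data hasn't arrived recently:
SELECT site_code, last_batch_at
  FROM sync_status
 WHERE last_batch_at < now() - interval '24 hours';
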
The requirements scream ASP model, except that this system needs to
remain functional for disaster management, where it's likely there
won't be any communications at all. At least, that's the constraint
I've been given.

This may turn out to be a matter of managing client expectations plus
some additional infrastructure investment (e.g. better satellite
communications on the MMUs and satellite backup links for the fixed
clinics).
We're at the very early, head-banging stage of this project, so I have
a fairly optimistic list of requirements to resolve. This is an open
source project, though, so it would be terrific if we could build it
non-ASP.