On Fri, Oct 23, 2009 at 2:45 AM, Sydney Puente <sydneypue...@xxxxxxxxx> wrote:
> This data will be accessed a couple of times a second, and I have a cunning
> plan to have a view that points to the initial dataload, and then load up
> the new data into a shadow table, drop the view and then recreate it
> pointing to the shadow table ( which will then no longer be the shadow).
If it is only 100k rows, then do it within a transaction: 1) delete all
rows, 2) insert all new rows, 3) commit, 4) vacuum.
Don't try to compact the table with CLUSTER or VACUUM FULL, since
you'll just re-expand it on the next synchronization.
There should be no blocking of your read access. This assumes your
copy is read-only, which you imply.
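A sketch of that refresh cycle in SQL (the table name `mytable` and the source of the new rows are assumed, not from the thread; note that VACUUM cannot run inside the transaction, so it goes after the COMMIT):

```sql
BEGIN;
DELETE FROM mytable;             -- readers still see the old rows (MVCC)
INSERT INTO mytable SELECT ...;  -- load the fresh copy from wherever it comes from
COMMIT;                          -- readers now atomically see the new rows
VACUUM mytable;                  -- reclaim the dead row versions for reuse
```

Because the delete and insert happen in one transaction, concurrent readers see either the complete old dataset or the complete new one, never a half-loaded table.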
++++++++++++
Ah I see what you mean - thanks very much that is v helpful!
Yes the copy will be read-only.
I will have 3 tables of data being read (read-only), and in the background
3 shadow tables populated from an unreliable db over an unreliable network.
Not quite sure how I can "insert all the rows" in SQL.
I have Postgres 8.0.3, BTW.
Syd