On Sun, Feb 28, 2010 at 10:23 PM, Terry <td3201@xxxxxxxxx> wrote:
> On Sun, Feb 28, 2010 at 7:12 PM, John R Pierce <pierce@xxxxxxxxxxxx> wrote:
>> Terry wrote:
>>>
>>> One more question. This is a pretty decent sized table. It is
>>> estimated to be 19,038,200 rows. That said, should I see results
>>> immediately pouring into the destination table while this is running?
>>>
>>
>> SQL transactions are atomic. You won't see anything in the 'new' table
>> until the INSERT finishes committing; then you'll see it all at once.
>>
>> You will see a fair amount of disk write activity while it's running.
>> 20M rows will take a while to run the first time, and probably a fair
>> amount of memory.
>
> This is working very well. The initial load worked great. It took a
> little while, but it has been fine since then. I am using this:
>
> INSERT INTO client_logs SELECT * FROM clients_event_log as t1 where
> t1.ev_id > (select max(t.ev_id) from client_logs as t);
>
> However, I got lost in that problem and overlooked another. I need to
> convert the unix time in the ev_time column to a timestamp. I have the
> idea with this little bit, but I'm not sure how to integrate it nicely:
>
> select timestamptz 'epoch' + 1267417261 * interval '1 second'
>

I love overcomplicating things:

SELECT *, to_timestamp(ev_time) FROM clients_event_log as t1 where
t1.ev_id > (select max(t.ev_id) from client_logs as t)
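Putting the two pieces together: a minimal sketch, assuming the converted
value is stored in an extra timestamptz column added as the last column of
client_logs, so that SELECT * plus the computed column still lines up with
the destination's column order. That layout is an assumption, not something
stated in the thread.

    -- Incremental copy plus epoch-to-timestamptz conversion (sketch).
    -- Assumes client_logs has all columns of clients_event_log followed
    -- by one trailing timestamptz column.
    INSERT INTO client_logs
    SELECT t1.*, to_timestamp(t1.ev_time)
    FROM clients_event_log AS t1
    WHERE t1.ev_id > (SELECT max(t.ev_id) FROM client_logs AS t);

One thing to keep in mind: if client_logs is completely empty, the
max(ev_id) subquery returns NULL and the WHERE clause matches nothing, so
the very first load still needs the plain INSERT ... SELECT without the
filter, as was done above.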