
Re: Fast data, slow data


 



On Thu, Jun 26, 2014 at 5:49 PM, Shaun Thomas <sthomas@xxxxxxxxxxxxxxxx> wrote:
> Then you create a job that runs however often you want, and all that job
> does, is move old rows from my_table, to my_table_stable. Like so:
>
> BEGIN;
> INSERT INTO my_table_stable
> SELECT * FROM ONLY my_table
>  WHERE date_col >= now() - INTERVAL '15 minutes';
> DELETE FROM ONLY my_table
>  WHERE date_col >= now() - INTERVAL '15 minutes';
> COMMIT;

This deserves a caveat: under the default "read committed" isolation
level, this example can delete more rows than it inserts. Each statement
gets its own snapshot, so rows committed between the INSERT and the
DELETE will match the DELETE's predicate even though the INSERT never
copied them. Even if your date_col never changes and you don't have
long-running transactions, it's best not to use this pattern. You could
change the isolation level using SET TRANSACTION, or, much better, use a
writable CTE (wCTE) to do the move atomically:

WITH deleted AS (
  DELETE FROM ONLY my_table
  WHERE date_col >= now() - INTERVAL '15 minutes'
  RETURNING *
)
INSERT INTO my_table_stable
  SELECT * FROM deleted;
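
For completeness, the SET TRANSACTION alternative mentioned above might
look something like this (a sketch using the same table and column names
as the example; error handling is omitted):

```sql
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
-- Both statements now share a single snapshot, so the DELETE cannot
-- see (and remove) rows committed after the INSERT's SELECT ran.
-- now() is also stable for the whole transaction, so the predicates match.
INSERT INTO my_table_stable
SELECT * FROM ONLY my_table
 WHERE date_col >= now() - INTERVAL '15 minutes';
DELETE FROM ONLY my_table
 WHERE date_col >= now() - INTERVAL '15 minutes';
COMMIT;
```

The trade-off is that under REPEATABLE READ, a concurrent write to one
of the matched rows aborts the transaction with a serialization failure,
so the job would need retry logic. That is one reason the wCTE above is
the simpler fix.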

Regards,
Marti


