Hello all,

One of our customers is using PostgreSQL with tables containing millions of records. A simple 'SELECT * FROM <table>' takes far too long in that case, so we advised him to use the LIMIT and OFFSET clauses. However, he now has a concurrency problem: records deleted, added, or updated by one process shift the effective OFFSET position seen by another process, so records are either skipped or read again.
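For reference, the paging pattern we advised looks roughly like this (the table and column names here are made up for illustration):

    -- Hypothetical table 'orders' with key column 'id';
    -- page size of 1000 is also just an example.
    SELECT * FROM orders
    ORDER BY id
    LIMIT 1000 OFFSET 5000;
    -- If another session deletes rows that sort before this offset
    -- between two page fetches, the next OFFSET no longer lands on
    -- the same position, so rows are skipped; inserts shift rows
    -- the other way and they are read again.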
The solution to that problem is to use transactions with isolation level SERIALIZABLE, but wrapping a loop that reads millions of records in one long transaction is far from ideal, I think.
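That workaround would look something like this (again only a sketch, using the same hypothetical table):

    -- A single transaction around the whole paging loop keeps the
    -- snapshot stable, so offsets stay consistent between pages...
    BEGIN ISOLATION LEVEL SERIALIZABLE;
    SELECT * FROM orders ORDER BY id LIMIT 1000 OFFSET 0;
    SELECT * FROM orders ORDER BY id LIMIT 1000 OFFSET 1000;
    -- ...repeat until a page comes back empty...
    COMMIT;
    -- ...but the transaction stays open for the entire scan of
    -- millions of rows, which is what bothers me.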
Does anyone have a suggestion for this problem? Is there, for instance, an alternative to LIMIT/OFFSET so that SELECT on large tables performs well?
Thank you for your help,

Jan van der Weijde