On 11/17/10 02:55, Josh Berkus wrote:
>> If you do wish to have the data tossed out for no good reason every so
>> often, then there ought to be a separate attribute to control that. I'm
>> really having trouble seeing how such behavior would be desirable enough
>> to ever have the server do it for you, on its terms rather than yours.
> I don't quite follow you. The purpose of unlogged tables is for data
> which is disposable in the event of downtime; the classic example is a
> user_session_status table. In the event of a restart, all user sessions
> are going to be invalid anyway.
Depends on what you mean by "session".
Typical web application session data, e.g. for the PHP applications that
are deployed in *huge* numbers, resides directly on the file system and is
not guarded by anything (not even fsyncs). On an operating system crash
(and I do mean when the whole machine and the OS go down), the worst that
can happen is that some of those session files end up garbled or missing;
all the others work perfectly fine when the server is brought back up, and
the users can continue working within their sessions. *That* is useful
session behaviour, and it is also useful for logs.
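
To illustrate what I mean (a minimal sketch in Python, not how PHP's
actual session handler works; the directory layout, the file format and
the function names here are all made up):

    import json
    import os

    SESSION_DIR = "/var/lib/sessions"  # hypothetical location

    def load_session(session_id):
        """Return the session dict, or None if the file is absent or garbled.

        A missing or corrupt file just means this one user gets a fresh
        session; every other session on the machine is unaffected.
        """
        path = os.path.join(SESSION_DIR, "sess_" + session_id)
        try:
            with open(path, "r") as f:
                return json.load(f)
        except (FileNotFoundError, ValueError):
            # Garbled after a crash, or never written: treat as expired.
            return None

    def save_session(session_id, data):
        """Write the session without fsync: speed over durability, on purpose."""
        path = os.path.join(SESSION_DIR, "sess_" + session_id)
        tmp = path + ".tmp"
        with open(tmp, "w") as f:
            json.dump(data, f)
            # Deliberately no f.flush()/os.fsync() here.
        os.rename(tmp, path)  # atomic on POSIX, so a reader never sees half a file

The point is that the reader tolerates damage instead of the store trying
to prevent it.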
Unlogged tables that are deliberately emptied for no good reason do not
seem very useful to me. I'd rather support an (optional) mode (if it can
be implemented) in which PostgreSQL scans through these unlogged tables on
startup and discards any pages whose checksums don't match, but accepts
all the others as "good enough". Even better: maybe not all pages need to
be scanned, only the last few, if some kind of mechanism can act as a
checkpoint for data validity.
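
To make that concrete (a conceptual sketch only, nothing like PostgreSQL's
actual page layout or recovery code; the 8192-byte page size matches
PostgreSQL's default block size, but the header format and the CRC32
checksum here are invented for illustration):

    import zlib

    PAGE_SIZE = 8192     # PostgreSQL's default block size
    CHECKSUM_BYTES = 4   # hypothetical: CRC32 stored at the start of each page

    def salvage_pages(path):
        """Yield (page_number, payload) for every page whose checksum verifies.

        Pages that fail the check are simply skipped, i.e. "discarded";
        that is the proposed startup behaviour for unlogged tables after
        a crash.
        """
        with open(path, "rb") as f:
            page_no = 0
            while True:
                page = f.read(PAGE_SIZE)
                if len(page) < PAGE_SIZE:
                    break  # a truncated tail page is discarded as well
                stored = int.from_bytes(page[:CHECKSUM_BYTES], "little")
                payload = page[CHECKSUM_BYTES:]
                if zlib.crc32(payload) == stored:
                    yield page_no, payload
                page_no += 1

The "only the last few pages" refinement would then mean remembering, at
each checkpoint, the highest page known to be good, and starting the scan
there instead of at page zero.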