Re: table as log (multiple writers and readers)

Craig Ringer wrote:
[snip]
If you really want to make somebody cry, I guess you could do it with dblink - connect back to your own database from dblink and use a short transaction to commit a log record, using table-based (rather than sequence) ID generation to ensure that records were inserted in ID order. That'd restrict the "critical section" in which your various transactions were unable to run concurrently to a much shorter period, but would result in a log message being saved even if the transaction later aborted. It'd also be eye-bleedingly horrible, to the point where even the "send a message from a C function" approach would be nicer.
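
For illustration, a rough and untested sketch of that dblink trick (assuming the dblink module is installed; the connection string and the tables app_log / log_counter are made up for the example):

  -- hypothetical setup:
  --   CREATE TABLE app_log     (id bigint PRIMARY KEY, msg text);
  --   CREATE TABLE log_counter (id bigint);  -- holds exactly one row

  SELECT dblink_connect('logconn', 'dbname=mydb');   -- connect back to ourselves

  SELECT dblink_exec('logconn', $$
      BEGIN;
      -- the row lock on the counter serializes ID hand-out until this COMMIT,
      -- so log IDs are assigned in commit order
      UPDATE log_counter SET id = id + 1;
      INSERT INTO app_log (id, msg)
          SELECT id, 'something happened' FROM log_counter;
      COMMIT;
  $$);

  SELECT dblink_disconnect('logconn');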

This will not work for the original poster's problem. If a single transaction hangs long enough before committing while others succeed, the ordering of the changes (the IDs) is preserved, but the commits can still become visible out of order.

The issue is that you don't really have the critical section you describe; there is no single lock that everyone is 'fighting' for.

It would work with an added table write lock (or stronger); that lock would then be the critical section, something like the sketch below.
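
Roughly (again untested, using the same made-up app_log table; EXCLUSIVE mode still allows plain SELECTs, so readers are not blocked):

  BEGIN;
  -- ... the transaction's real work ...

  -- the table lock is held until COMMIT, so concurrent log writers serialize here
  LOCK TABLE app_log IN EXCLUSIVE MODE;
  INSERT INTO app_log (id, msg)
      SELECT coalesce(max(id), 0) + 1, 'something happened' FROM app_log;
  COMMIT;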

In my opinion, I would just forget about this one rather quickly, as you more or less proposed yourself...

- Joris

