Dear Adrian!
Again: as I see it, the leading part of code blocks gets stripped by the mailing system.
We have an "ourlocks" table which holds records (TableName, RecordID, SessionInnerID, TimeStamp, etc., with TableName + RecordID as the primary key).
If anybody wants to lock a record "for a long time", "across transactions", it tries to insert a new record here.
If another process wants to lock the same record, it sees the existing record (or gets a key violation error), so it stops.
This is not meant to protect all tables, only the main entities that have many subtables, like "Products", "Offers", etc.
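A minimal sketch of such a table (the column names and types here are my assumptions, not our actual schema):

```sql
-- Hypothetical sketch of the "ourlocks" table described above.
CREATE TABLE ourlocks (
    tablename       text        NOT NULL,
    recordid        bigint      NOT NULL,
    sessioninnerid  integer     NOT NULL,  -- points to our own sessioninfo table
    locktimestamp   timestamptz NOT NULL DEFAULT now(),
    PRIMARY KEY (tablename, recordid)      -- second inserter gets a key violation
);
```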
We can't use transactions for this, because in the editor we must post/commit subdata.
And PG is different from other DBs here: if ANY statement fails, it rolls back the whole transaction automatically (see my prior mail).
In Firebird we can do simple record-lock protection with a second transaction on the main connection, but in MS and PG we cannot.
So we used real records in a real table. But how do we clean up if a client disconnects without deleting its own records?
For this we created our own sessioninfo table with an inner id, user id, timestamp, and [connectionid, connectiontime].
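Such a sessioninfo table could look roughly like this (again only an assumed sketch; in PG the connection id/time would come from pg_backend_pid() and pg_stat_activity.backend_start):

```sql
-- Hypothetical sessioninfo table tracking our application sessions.
CREATE TABLE sessioninfo (
    innerid        serial PRIMARY KEY,
    userid         integer     NOT NULL,
    created        timestamptz NOT NULL DEFAULT now(),
    connectionid   integer,     -- e.g. pg_backend_pid() of the session
    connectiontime timestamptz  -- e.g. backend_start of that pid
);
```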
The locking mechanism checks for an existing lock, e.g. (Offer, 117); if anybody holds a lock on that record, it checks whether that session is still alive or not.
If the connection is active (ID + TS both match), the lock is valid, and we can show who is editing the record ("please ask him to release it", etc.).
If not, we can delete the stale record and insert our own.
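Under the assumptions above, the stale-lock takeover could be sketched like this (all table and column names are hypothetical; only pg_stat_activity and pg_backend_pid() are real Postgres objects):

```sql
-- A lock row is stale if its session's connection (pid + start time)
-- no longer appears in pg_stat_activity.
DELETE FROM ourlocks l
USING sessioninfo s
WHERE l.tablename = 'Offer' AND l.recordid = 117
  AND l.sessioninnerid = s.innerid
  AND NOT EXISTS (
        SELECT 1 FROM pg_stat_activity a
        WHERE a.pid = s.connectionid
          AND a.backend_start = s.connectiontime);

-- If the holder is still alive, this fails with a primary-key violation,
-- which is exactly the "record is locked" signal.
INSERT INTO ourlocks (tablename, recordid, sessioninnerid)
VALUES ('Offer', 117, 42);  -- 42 = our own sessioninfo.innerid
```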
The main goal is to protect the main entities. This works in MS SQL.
My question was about how to get my client's connection timestamp, the way pg_backend_pid() gives me the PID.
But as you wrote, I can get it from the activity view. Because no other live PID can be the same as mine, I can select my own row from that view.
You said this has dangers (like GUID repetition). Yes, it has. And it is not the BAD SYSADMIN but the summer/winter time change that is dangerous (the backward one). In PG we can extend our "guid" with the IP and port too, and this should be safe enough for us.
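For the record, all of these values are available for the current backend from pg_stat_activity:

```sql
-- The current session's identity: PID plus connection start time,
-- optionally extended with client IP and port to survive PID reuse.
SELECT pid, backend_start, client_addr, client_port
FROM pg_stat_activity
WHERE pid = pg_backend_pid();
```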
Thanks
2016-04-09 16:05 GMT+02:00 Adrian Klaver <adrian.klaver@xxxxxxxxxxx>:
On 04/09/2016 01:30 AM, Durumdara wrote:
Dear Everybody!
In MS we had a "persistent lock" structure and method.
This works across transactions because it is based on real records in a real
table with a primary key (tablename + id).
For garbage collection we had special session info.
In MS the session id is a smallint, so it can repeat after server
restarts, but my colleague found a "session creation timestamp".
Together these form a unique key.
With this we can check for dead sessions and clean up their records.
It might help to explain more what it is you are trying to achieve.
First I am not sure what you mean by 'persistent lock', especially as it applies to Postgres?
Second, I assume by garbaging you mean garbage collection of something?
If that is the case what exactly are you garbage collecting?
I see 'clean records', what records would these be?
In particular, on Postgres, where are you going to do this?
We want to create the same mechanism.
If the above questions did not already cover this, what mechanism?
I know there are advisory locks in PG, but I want to use a session id.
This could be:
pg_backend_pid()
But the PID may repeat.
Where can I get a timestamp or some other unique data with which I can create a
combined primary key?
Thanks for your help!
dd
Adrian Klaver
adrian.klaver@xxxxxxxxxxx