On 3/22/20 2:53 PM, pabloa98 wrote:
So the question may actually be:
How do we improve our locking code, so we don't have to spawn millions
of sequences?
What is the locking method you are using?
I am not using locking with the million-sequence solution. I do not want
something that locks, because of the problems described below.
I prefer a solution that generates a gap (skips a couple of numbers) and
does not use locks.
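Roughly, the idea is one sequence per (group, element) pair and nextval()
at insert time. A minimal sketch, assuming a hypothetical table and
sequence (the names here are only illustrative, not our real schema):

CREATE SEQUENCE code_seq_g1_e7;  -- one sequence per (group, element) pair

-- nextval() does not take row locks and is never rolled back, so an
-- aborted transaction simply leaves a gap in the numbering
INSERT INTO element_code (group_id, element_id, code)
VALUES (1, 7, nextval('code_seq_g1_e7'));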
> The lock part is because we solved a similar problem with a counter by
> row locking the counter and increasing it in another part of the
> database. The result is that all the queries using that table are
> queued per (group, element) pair, which is not that bad because we are
> not inserting thousands of rows per second. Still, it is killing
> cluster performance (though performance is still OK from the business
> point of view). The problem with using locks is that they are too
> sensitive to developer errors and bugs. Sometimes connected clients
> abort and the connection is returned to the pool with the lock active
> until the connection is closed or someone unlocks the row. I would
> prefer to have something more resilient to developer/programming
> errors, if possible.
>
Now that I read this paragraph, I realize I was not clear enough.
I am saying we do not want to use locks because of all the problems
described.
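For reference, the row-locked counter pattern being described is roughly
the following (the table and column names are made up for the example):

BEGIN;
-- take the row lock on the counter for this (group, element) pair;
-- any other session asking for the same pair queues behind this lock
SELECT current_value
  FROM code_counter
 WHERE group_id = 1 AND element_id = 7
   FOR UPDATE;

UPDATE code_counter
   SET current_value = current_value + 1
 WHERE group_id = 1 AND element_id = 7;
COMMIT;

-- If the client dies before COMMIT and the pooled connection is left
-- "idle in transaction", the row stays locked until that transaction
-- is rolled back or the connection is closed.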
And what I was asking is: what locking were you doing?
And it might be better to ask the list how to solve those problems than
to create a whole new set of problems by using millions of sequences.
--
Adrian Klaver
adrian.klaver@xxxxxxxxxxx