On 1/11/22 01:34, Dominique Devienne wrote:
> On Mon, Jan 10, 2022 at 11:13 PM Adrian Klaver
> <adrian.klaver@xxxxxxxxxxx> wrote:
> I go into use cases in my previous message, sent a minute ago.
>> In your OP the error occurred here:
>> DROP OWNED BY "Epos-PNS (a73e1fbdcc5b4460ba55703a0d6b6ad7)" CASCADE
>> The CASCADE allows the command to recurse to objects that may not be
>> owned by the specified role. Is that what you want?
> Sure. Although that's irrelevant. There are no such other objects.
Except we did not know that until now.
> Each 2+N schemas "instance" is self-contained, and all ROLEs associated
> with those SCHEMAs only have privileges on those schemas.
> Actual LOGIN USERs are granted access to a subset of the ROLEs in
> "instances" based on individual permissions.
> So when I nuke an "instance", I DROP all its SCHEMAs and all their
> associated ROLEs.
Actually, what you do is DROP OWNED BY ... CASCADE.
> And DROP'ing those "instance"-specific ROLEs implicitly severs the
> GRANTs made to the LOGIN USERs, who remain.
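As an aside, if a role really owns nothing that other roles depend on,
the teardown you describe needs no CASCADE; with placeholder names,
roughly:

   DROP OWNED BY "instance_role";  -- drops its objects, revokes its grants
   DROP ROLE "instance_role";      -- memberships go away, LOGIN USERs stay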
>> What is the purpose of the process: clearing out a given schema,
>> dropping objects only owned by a given role, or something else?
"dropping objects only owned by a given role" indeed.
I.e. exactly what DROP OWNED BY is designed to do, no?
As pointed out, it can end up dropping objects owned by other roles due
to dependencies. This means DROP OWNED BY ... CASCADE can extend past
'dropping objects only owned by a given role'.
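A contrived illustration, with invented role names:

   -- as role_a:
   CREATE TABLE a_table (id int);
   GRANT SELECT ON a_table TO role_b;
   -- as role_b:
   CREATE VIEW b_view AS SELECT id FROM a_table;
   -- later:
   DROP OWNED BY role_a CASCADE;   -- role_b's b_view is dropped as well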
> It just so happens that it seems impossible to do that, because that
> involves too many locks :(.
It is not impossible, just difficult to predict what to set
max_locks_per_transaction to.
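For reference, the sizing rule and the knob itself (the value below is
just a placeholder):

   SHOW max_locks_per_transaction;   -- 64 by default
   -- the shared lock table holds roughly max_locks_per_transaction *
   -- (max_connections + max_prepared_transactions) locks for the whole
   -- cluster, so one big DROP can use slots other sessions are not using
   ALTER SYSTEM SET max_locks_per_transaction = 4096;
   -- takes effect only after a server restart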
> With 1400 relations for each of the N schemas, and actual in-prod
> instances at client sites that require hundreds of such schemas (on the
> current system, not based on PostgreSQL), I'm guessing
So now we get to the crux of the issue: this is a migration from another
system. It would be useful to know what that system is and how it is
handled there. There may be people on this list that have similar
experiences.
> I can't assign 1400 * 1000 = over a million locks on the cluster, can I?
> So now the question I asked in that other message is whether I should
> use a dedicated DB per "instance" instead? --DD
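On the dedicated-DB question: one point in its favour is that teardown
becomes a single statement that takes no lock per contained relation
(database name invented):

   DROP DATABASE "instance_a73e1fbd";  -- fails if sessions are still connected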
--
Adrian Klaver
adrian.klaver@xxxxxxxxxxx