Re: "not related" code blocks for removal of dead rows when using vacuum and this kills the performance

From: Lars Aksel Opsahl <Lars.Opsahl@xxxxxxxx>
>From: Laurenz Albe <laurenz.albe@xxxxxxxxxxx>
>>
>>It is not entirely clear what you are doing, but it seems like you are holding
>>a database transaction open, and yes, then it is expected behavior that
>>VACUUM cannot clean up dead rows in the table.
>>
>>Make sure that your database transactions are short.
>>Don't use table or row locks to synchronize application threads.
>>What you could use to synchronize your application threads are advisory locks,
>>they are not tied to a database transaction.
>>
>
>Hi
>
>The details are here: https://gitlab.com/nibioopensource/resolve-overlap-and-gap/-/issues/67#note_1779300212 and
>here is also a reference to a test script that shows the problem: https://gitlab.com/nibioopensource/resolve-overlap-and-gap/uploads/9a0988b50f05386ec9d91d6600bb03ec/test_issue_67.sql
>
>I am not taking any locks; I am just doing plain CRUD operations.
>
>The key point is that the master code does not create any tables or insert any rows; that is done by many short operations, as you suggested.
>
>But even though the master code does not perform any operations against the test table, it still blocks removal of dead rows.
>
>If this is expected behavior, it means that any long-running transaction will block removal of dead rows for all visible tables in the database, and that seems like a problem or weakness of PostgreSQL.
>
>While writing this it occurred to me that I might get around the problem by making the table invisible to the master code, but that would make things very complicated for me.
>
>Thanks.
>
>Lars

Hi

I have now tested running the master (orchestration) code as user joe.

In the master code I connect back as user lop, create the test table test_null, and insert data in many tiny operations.

User joe, who runs the long-running operation, knows nothing about the table test_null and has no grants on it.

The table test_null is not granted to public either.
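To make the setup concrete, the permission layout looks roughly like this (a sketch only; the real schema is in the test script linked above, and the column list here is made up for illustration):

```sql
-- Connected as user lop: create the test table.
-- No GRANT is issued, so neither joe nor public gets any privilege on it.
CREATE TABLE test_null (id int, value text);

-- Connected as user joe, any access fails:
-- SELECT * FROM test_null;
-- ERROR:  permission denied for table test_null
```

So joe cannot read, write, or even describe the table, yet his long-running transaction still prevents dead-row cleanup in it.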

The problem is the same: the long-running transaction belonging to joe kills the performance of a table that user joe has no access to and knows nothing about.

If this is expected behavior, it means that any user on the database who runs a long-running SQL statement, even one that does not insert any data, can kill performance for any other user in the database.
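For what it is worth, the sessions that hold back the dead-row horizon can be seen in pg_stat_activity; a query like this (a sketch) lists open transactions whose backend_xmin keeps VACUUM from removing rows, oldest first:

```sql
SELECT pid, usename, state, backend_xmin, xact_start
FROM pg_stat_activity
WHERE backend_xmin IS NOT NULL
ORDER BY xact_start;
```

In my test the joe session shows up here for the whole duration of the master run, which matches the VACUUM behavior I see.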

So applications like QGIS, which seem to keep connections open for a while, can then also kill performance for any other user in the database.

Having PostgreSQL work like this also makes it very difficult to debug performance issues, because a problem may just be a side effect of an unrelated SQL statement.

So I hope this is not the case, and that I have done something wrong or that there are parameters that can be adjusted to get around this problem.
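The closest parameters I have found so far are timeouts that terminate long transactions rather than letting them hold back the horizon; for example (a sketch, the values are just examples and would need tuning):

```sql
-- Terminate sessions that sit idle inside an open transaction:
ALTER SYSTEM SET idle_in_transaction_session_timeout = '10min';

-- Cap the runtime of any single statement:
ALTER SYSTEM SET statement_timeout = '30min';

SELECT pg_reload_conf();
```

These do not change the underlying behavior, they only limit how long a transaction can exist, so they would not help the master code itself, which legitimately runs for a long time.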

Thanks

Lars

