Re: ogr2ogr slow sql when checking system tables for column info and so on.


From: Julien Rouhaud <rjuju123@xxxxxxxxx>
Sent: Friday, October 21, 2022 11:48 AM
To: Lars Aksel Opsahl <Lars.Opsahl@xxxxxxxx>
Cc: pgsql-performance@xxxxxxxxxxxxxxxxxxxx <pgsql-performance@xxxxxxxxxxxxxxxxxxxx>
Subject: Re: ogr2ogr slow sql when checking system tables for column info and so on.
 
>
>Hi,
>

>> Here https://explain.depesz.com/s/oU19#stats the sql generated by ogr2ogr that takes 33 seconds in this sample
>> [...]
>> ->  Seq Scan on pg_attribute a  (rows=1464751) (actual time=0.028..17740.663
>> [...]
>> Then we take copy of the pg_catalog tables involved.
>>
>> Here https://explain.depesz.com/s/NEwB#source is the trace when using the same sql as from ogr2ogr but using the tables in test_pg_metadata and then it runs in 5 seconds.
>> [...]
>> ->  Seq Scan on pg_attribute a  (rows=1452385) (actual time=0.006..156.392
>>
>> I do not understand why it's so much slower to use the tables in pg_catalog than the tables in test_pg_metadata, because they have the same content.
>
>In both cases you have a sequential scan over the pg_attribute table, but for
>pg_catalog it takes 17 seconds to retrieve the 1.4M rows, while in the new table
>it takes 156 ms.
>
>It looks like your catalog is heavily bloated, which is the cause of the
>slowdown.
>
>You could do a VACUUM FULL of the tables in pg_catalog, but it would only be a
>short-term fix, as it's likely that your catalog will get bloated again.  Do you
>rely a lot on temporary tables?  If yes, that can easily lead to this kind of side
>effect, and you should modify your code to perform a manual vacuum of the catalog
>tables very often, or add a dedicated high-frequency task that runs such
>vacuums to keep the bloat under control.
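The frequent manual catalog vacuum described above could be sketched as follows. This is a minimal sketch, and the list of tables is an assumption: with heavy temporary-table use, pg_class, pg_attribute, pg_type and pg_depend are the catalogs that typically churn most.

```sql
-- Plain (non-FULL) VACUUM of the catalog tables that churn most under
-- heavy temporary-table use. Plain VACUUM does not take an ACCESS
-- EXCLUSIVE lock, so it can run frequently (e.g. from cron or pg_cron)
-- without blocking normal work.
VACUUM ANALYZE pg_catalog.pg_class;
VACUUM ANALYZE pg_catalog.pg_attribute;
VACUUM ANALYZE pg_catalog.pg_type;
VACUUM ANALYZE pg_catalog.pg_depend;
```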

Hi

Yes, we sometimes use a lot of temp tables.

With "VACUUM FULL ANALYZE" we got the same time as with the copied tables (https://explain.depesz.com/s/Yxy9), so that works.

OK, then we will start by triggering a 'VACUUM FULL ANALYZE' for all the tables in pg_catalog, because this seems to be the only thing that works for now.
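One way to do this for every table in pg_catalog at once is to generate the commands from pg_class. This is a sketch: \gexec is psql-specific, and VACUUM FULL takes an ACCESS EXCLUSIVE lock on each table while it rewrites it, so it should be run in a quiet window.

```sql
-- Generate one VACUUM FULL ANALYZE statement per ordinary table in
-- pg_catalog, then let psql execute each generated row via \gexec.
SELECT format('VACUUM FULL ANALYZE pg_catalog.%I', c.relname)
FROM pg_class AS c
WHERE c.relnamespace = 'pg_catalog'::regnamespace
  AND c.relkind = 'r'
\gexec
```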

I assume that adding more indexes on the tables in pg_catalog to avoid table scans is not that easy.

Thanks for your help.

Lars

