On Mon, Apr 8, 2019 at 16:55 Krzysztof Plocharz <plocharz@xxxxxxxxxxxxxx> wrote:
On 2019/04/08 16:42, Justin Pryzby wrote:
> On Mon, Apr 08, 2019 at 04:33:34PM +0200, Pavel Stehule wrote:
>> On Mon, Apr 8, 2019 at 16:11 Krzysztof Plocharz <plocharz@xxxxxxxxxxxxxx> wrote:
>>
>>> We have a very strange query planning problem. Long story short, it
>>> takes 67626.278 ms just to plan. Query execution takes 12 ms.
>>>
>>> The query has 7 joins and 2 subselects.
>>> The issue does not look deterministic; sometimes it takes only a few ms
>>> to plan the query.
>>>
>>> One of the tables has 550,485,942 live tuples and 743,504,012 dead
>>> tuples. Running ANALYZE on that table solves the problem only temporarily.
>>>
>>> The question is: how can we debug what is going on?
>>
>> Please check your indexes for bloat. The planner gets the min and max
>> values from indexes, and this operation is slow on bloated indexes.
Yes, we thought about this; there are over 700,000,000 dead tuples. But
as you said, it should not result in 67-second planning...
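
For reference, one way to put a number on the suspected index bloat is
the pgstattuple contrib extension; a minimal sketch, with a placeholder
index name:

    -- needs the pgstattuple contrib extension (superuser, or a member
    -- of pg_stat_scan_tables on PG 10+)
    CREATE EXTENSION IF NOT EXISTS pgstattuple;

    -- low avg_leaf_density / high leaf_fragmentation point to a bloated btree
    SELECT avg_leaf_density, leaf_fragmentation
    FROM pgstatindex('my_big_table_pkey');  -- placeholder index name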
>
> I think that's from get_actual_variable_range(), right?
>
> If it's due to bloating, I think the first step would be to 1) vacuum right
> now, and 2) set more aggressive autovacuum, like ALTER TABLE t SET
> (AUTOVACUUM_VACUUM_SCALE_FACTOR=0.005).
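
A sketch of both steps, plus a check on how far behind vacuum is (the
table name t comes from Justin's example; everything else is
illustrative):

    -- how many dead tuples has autovacuum left behind?
    SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC
    LIMIT 10;

    -- 1) reclaim the dead tuples now
    VACUUM (VERBOSE, ANALYZE) t;

    -- 2) have autovacuum fire after ~0.5% of the table changes
    ALTER TABLE t SET (autovacuum_vacuum_scale_factor = 0.005);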
>
We ran pg_repack and it did help, but is it possible for
get_actual_variable_range to take over 60 seconds?
Is there any other workaround for this besides pg_repack/vacuum?
Any way to actually debug this?
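
As a first check, EXPLAIN ANALYZE separates planning time from execution
time, so you can confirm it really is the planner; a sketch with a
stand-in query:

    -- "Planning time" vs. "Execution time" are printed at the end of the plan
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT 1;  -- substitute the slow 7-join query here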
You can use perf and get a profile.
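
A sketch of that, assuming you can reproduce the slow planning from a
psql session (the pid and the 70 s window are illustrative):

    # pid of the backend that will plan the slow query
    psql -c 'SELECT pg_backend_pid();'

    # record a call-graph profile of that backend while the query is planned
    perf record -g -p <backend_pid> -- sleep 70

    # then look for get_actual_variable_range and btree scans near the top
    perf report

Note that perf only shows useful symbol names if the postgres debug
symbols are installed.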
> What version is the Postgres server?
>
> Justin
>
>
On 2019/04/08 16:33, Pavel Stehule wrote:
>
> but 67 sec is really slow - it can be some other problem - is it a real
> computer or a virtual one?
>
Real, with pretty good specs: NVMe drives, a six-core AMD Opteron, 64 GB
of RAM. During testing the system was mostly idle.
>
> Best Regards,
> Krzysztof Płocharz