Re: Huge shared hit for small table

On Mon, Nov 04, 2019 at 07:38:40PM +0000, Scott Rankin wrote:
> In the staging environment, we get this:
> 
> Index Scan using "programPK" on public.program prog  (cost=0.29..0.35 rows=1 width=16) (actual time=0.002..0.003 rows=1 loops=21965)
>       Output: prog.id, prog.version, prog.active, prog.created_date, prog.last_modified_date, prog.created_by, prog.last_modified_by, prog.client_id, prog.scheme_id, prog.name, prog.legacy_group_id, prog.custom_fields, prog.setup_complete, prog.setup_messages, prog.legacy_program_type
>       Index Cond: (prog.id = per.program)
>       Buffers: shared hit=87860
> 
> In the production environment, we get this:
> 
> Index Scan using "programPK" on public.program prog  (cost=0.29..0.36 rows=1 width=16) (actual time=0.017..4.251 rows=1 loops=21956)
>        Output: prog.id, prog.version, prog.active, prog.created_date, prog.last_modified_date, prog.created_by, prog.last_modified_by, prog.client_id, prog.scheme_id, prog.name, prog.legacy_group_id, prog.custom_fields, prog.setup_complete, prog.setup_messages, prog.legacy_program_type
>        Index Cond: (prog.id = per.program)
>        Buffers: shared hit=25437716
> 
> The tables in both environments are about the same size (18MB) and the indexes are about the same size (360kb/410kb) – and the shared hits are pretty much the same on the other nodes of the query between the two environments.

I think it's because some heap pages are being visited many times, due to the
index tuples being badly "fragmented".  Note, I'm not talking about
fragmentation of index *pages*, which is what pgstattuple reports (and which
wouldn't have nearly so detrimental an effect).  It would be more accurate to
say the index tuples are badly "correlated" with the heap.

I'm guessing there are perhaps 25437716/87860 = 290 index tuples per heap page,
and they rarely point to the same heap page as their siblings.  "Hit" means
that this affects you even though the data is cached (by postgres).  So this is
apparently slow due to reading each page ~290 times rather than once to get
all of its tuples at once.
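To make the arithmetic concrete, here's a quick check using only the two
"Buffers: shared hit" figures quoted from the plans above (nothing else is
assumed):

```python
# Buffer hits reported by EXPLAIN (ANALYZE, BUFFERS) for the same index
# scan node in each environment (taken from the plans quoted above).
staging_hits = 87_860
production_hits = 25_437_716

# If the index were well correlated with the heap, each heap page would be
# read once and all of its matching tuples fetched together.  The ratio
# below suggests each heap page is instead being revisited ~290 times.
revisits_per_page = production_hits / staging_hits
print(round(revisits_per_page))  # prints 290
```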

> This has happened one time before, and we did a “REINDEX” on the program table – and that made the problem mostly go away.  Now it seems to be back, and I’m not sure what to make of it.

..which is consistent with my hypothesis.

You can use pg_repack or the CREATE INDEX+DROP+RENAME hack (which is what
pg_repack -i does).  In a fresh index, the tuples are sorted by heap TID.  You
could also CLUSTER the table itself on that index (or use pg_repack -t).
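A rough sketch of the index-swap approach, using the index/table names from
the plans above.  This assumes "programPK" backs the primary key constraint on
program.id (which the names suggest, but verify first); if it's a plain index,
a simple DROP INDEX + ALTER INDEX ... RENAME does the same job:

```sql
-- Build a fresh copy of the index without blocking writes; a freshly
-- built index has its tuples sorted by heap TID.
CREATE UNIQUE INDEX CONCURRENTLY program_pk_new ON public.program (id);

-- Swap the new index in under the existing constraint
-- (takes a brief exclusive lock on the table).
BEGIN;
ALTER TABLE public.program
    DROP CONSTRAINT "programPK",
    ADD CONSTRAINT "programPK" PRIMARY KEY USING INDEX program_pk_new;
COMMIT;
```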

In PG v12 you can use REINDEX CONCURRENTLY (but beware there's a crash
affecting its progress reporting, fix to be included in v12.1).
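On v12 that collapses to a single statement (again assuming the index name
from the plans above):

```sql
REINDEX INDEX CONCURRENTLY "programPK";
```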

Justin




