2017-02-22 10:59 GMT+13:00 Adrian Klaver <adrian.klaver@xxxxxxxxxxx>:
On 02/21/2017 01:44 PM, Patrick B wrote:
> Hi guys,
>
> I've got a lot of bloat indexes on my 4TB database.
>
> Let's take this example:
>
> Table: seg
> Index: ix_filter_by_tree
> Times_used: 1018082183
> Table_size: 18 GB -- wrong. Most of the table's data is in its pg_toast
> table; its real size is 2 TB.
How do you know one number is right and the other is wrong?
1. That table (seg) stores binary data. It is impossible for it to be only 18 GB.
2.
SELECT schema_name,
       pg_size_pretty(sum(table_size)::bigint),
       (sum(table_size) / pg_database_size(current_database())) * 100
FROM (SELECT pg_catalog.pg_namespace.nspname AS schema_name,
             pg_relation_size(pg_catalog.pg_class.oid) AS table_size
      FROM pg_catalog.pg_class
      JOIN pg_catalog.pg_namespace ON relnamespace = pg_catalog.pg_namespace.oid) t
GROUP BY schema_name
ORDER BY schema_name
pg_toast 2706 GB 82.62112838877240860000 <-- this belongs to the seg table.
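A minimal sketch for cross-checking this on the single table (assuming seg is visible on the search_path): pg_relation_size() counts only the main heap, while pg_total_relation_size() includes TOAST and indexes, so the two figures can be compared directly. The toast_approx column also includes the small FSM/VM forks, so it is approximate.

SELECT pg_size_pretty(pg_relation_size('seg'))        AS heap_size,    -- the "18 GB" figure
       pg_size_pretty(pg_total_relation_size('seg')
                      - pg_relation_size('seg')
                      - pg_indexes_size('seg'))        AS toast_approx, -- TOAST, where the binary data lives
       pg_size_pretty(pg_indexes_size('seg'))          AS index_size,
       pg_size_pretty(pg_total_relation_size('seg'))   AS total_size;   -- heap + TOAST + indexes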
Have you looked at the functions here?:
https://www.postgresql.org/docs/9.6/static/functions-admin.html#FUNCTIONS-ADMIN-DBOBJECT
--
> Index_size: 17 GB
> Num_writes 16245023
> Index definition: CREATE INDEX ix_filter_by_tree ON seg USING btree
> (full_path varchar_pattern_ops) WHERE (full_path IS NOT NULL)
>
>
>
> What is the real impact of a bloated index? If I reindex it, will queries
> be faster?
>
> Thanks
> Patrick
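On the reindex question above: a bloated index mainly costs extra disk, cache space and I/O on every index scan, so rebuilding it can speed up queries that use it. A minimal sketch for rebuilding it on 9.6 without blocking writes (the name ix_filter_by_tree_new is made up; a plain REINDEX also works but takes stronger locks):

-- Build a fresh, compact copy of the index without blocking writes
CREATE INDEX CONCURRENTLY ix_filter_by_tree_new
    ON seg USING btree (full_path varchar_pattern_ops)
    WHERE (full_path IS NOT NULL);

-- Swap it in: drop the bloated original and take over its name
DROP INDEX CONCURRENTLY ix_filter_by_tree;
ALTER INDEX ix_filter_by_tree_new RENAME TO ix_filter_by_tree;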
Adrian Klaver
adrian.klaver@xxxxxxxxxxx