On 3/19/25 10:36 AM, Siraj G wrote:
> Hi Adrian,
>
> I used this query to find the sizes:
> SELECT relname AS object_name,
>        relkind AS object_type,
>        pg_size_pretty(pg_relation_size(oid)) AS object_size
> FROM pg_class
> WHERE relnamespace = (SELECT oid FROM pg_namespace
>                       WHERE nspname = 'pg_catalog')
> ORDER BY pg_relation_size(oid) DESC;
> We have close to 2,000 tables; I am not sure of the column count.
> Around 120-150 of the tables are active, receiving data from DMS.
This would have been a good thing to have included in the original post.
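For a rough column count, something like this against pg_attribute
itself should work (one row per column; attisdropped marks columns
that were dropped but whose catalog rows remain):

-- all rows, including dropped columns and system columns
SELECT count(*) FROM pg_catalog.pg_attribute;

-- only live, user-visible columns
SELECT count(*) FROM pg_catalog.pg_attribute
WHERE NOT attisdropped AND attnum > 0;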
> We do not create/drop tables in our instance unless required.
> This is a managed Cloud SQL instance and we do not have any custom
> vacuum settings.
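Even on a managed instance, the effective autovacuum settings can be
confirmed from SQL, e.g.:

SELECT name, setting
FROM pg_settings
WHERE name LIKE 'autovacuum%';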
To be clear, is the issue in the Google Cloud SQL instance?
Or are you seeing the problem on the Postgres instance the data is
coming from?
On Wed, Mar 19, 2025 at 10:47 PM Adrian Klaver
<adrian.klaver@xxxxxxxxxxx> wrote:
On 3/19/25 10:06, Siraj G wrote:
> Hello!
>
> I have a PG (v16) instance which is occupying around 1TB of storage.
What is the exact version of Postgres 16? Include the x in 16.x.
> Out of this, around 350GB is occupied by the table
> pg_catalog.pg_attribute. Why is the catalog table's size so big?
>
> Here are the sizes:
How did you measure the sizes?
pg_attribute maintains information about table columns. How many table
columns do you have?
Are you creating/dropping tables on a regular basis?
Is autovacuum running properly?
Have you run VACUUM manually on pg_attribute?
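The dead tuple and vacuum questions can be answered with something
like:

SELECT n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_sys_tables
WHERE relname = 'pg_attribute';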
>
> pg_attribute                        338 GB
> pg_attribute_relid_attnam_index     117 GB
> pg_attribute_relid_attnum_index      69 GB
>
> I think this table must have tons of dead tuples. Please suggest
> whether we can purge any data or shrink the size of this table.
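A plain VACUUM only marks the dead tuple space as reusable; actually
returning space to the OS requires a rewrite. A sketch, assuming you
have sufficient privileges on the managed instance:

-- makes dead-tuple space reusable, but does not shrink the files
VACUUM (VERBOSE) pg_catalog.pg_attribute;

-- rewrites the table and rebuilds its indexes, returning space to
-- the OS; takes an ACCESS EXCLUSIVE lock that will block catalog
-- access while it runs
VACUUM FULL pg_catalog.pg_attribute;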
>
> Regards
> Siraj
--
Adrian Klaver
adrian.klaver@xxxxxxxxxxx
--
Adrian Klaver
adrian.klaver@xxxxxxxxxxx