On 10/26/2016 11:58 AM, Natalie Wenz wrote:
Hi all,
I am seeing some performance issues that I'm trying to track down on a large database. One of the things I'm beginning to suspect is a particularly large table with many columns, about 200 of which (type text) contain large chunks of data. For a given row, maybe 10-30 of those columns contain data, so not all 200 for each row, but the data can still be pretty sizable. There are currently around 750 million records in this table, which is about 22 TB in size.

I was trying to learn more about TOAST, and I see some references in the wiki and on the hackers list to performance issues when you approach 4 billion OIDs for a single table (which, I gather, are used when data is toasted). Given my rudimentary understanding of how TOAST works, I was wondering if there is a way to see how many OIDs are used for a table, or another way to know whether we're running into TOAST limits for a single table.
What I was reading, for reference:
https://wiki.postgresql.org/wiki/TOAST
http://osdir.com/ml/postgresql-pgsql-hackers/2015-01/msg01901.html
Also, we are running Postgres 9.5.4.
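For reference, each out-of-line value stored in a table's TOAST relation is keyed by a chunk_id, which is an OID, so the 4-billion ceiling applies per TOAST table. One way to gauge usage is a minimal sketch like the following, assuming a hypothetical table name big_table (the distinct count will be slow on a table this size):

    -- Locate the table's TOAST relation:
    SELECT reltoastrelid::regclass
    FROM pg_class
    WHERE relname = 'big_table';

    -- Count distinct chunk IDs (one OID per out-of-line toasted value);
    -- substitute the relation name returned above for this placeholder:
    SELECT count(DISTINCT chunk_id)
    FROM pg_toast.pg_toast_12345;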
OIDs are used per relation, not per row.

Your problem likely has more to do with the number of rows plus the
width of the table. I suggest finding a way to partition this table.
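On 9.5 that means inheritance-based partitioning (declarative partitioning arrived later, in PostgreSQL 10). A minimal sketch, assuming a hypothetical parent table big_table with a timestamp column created_at:

    -- Child table holding one month of data; the CHECK constraint lets
    -- constraint exclusion skip this partition for non-matching queries:
    CREATE TABLE big_table_2016_10 (
        CHECK (created_at >= '2016-10-01' AND created_at < '2016-11-01')
    ) INHERITS (big_table);

    -- Inserts are typically routed to the right child table with a
    -- BEFORE INSERT trigger on the parent (not shown here).

With the constraint_exclusion setting at its default of 'partition', queries that filter on created_at will scan only the matching children.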
Sincerely,
JD
--
Command Prompt, Inc. http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.
Unless otherwise stated, opinions are my own.