Hi All;
We have a client with a table where large blobs (bytea) are stored. The
table has a key column whose values are numbers (like 112362), but
unfortunately it's a varchar column, so the blobs are accessed via
queries like:
select * from bigtable where keycol = '217765'
The primary reason we want to partition the table is for maintenance:
the table is well over 1.2 terabytes in size and they have never
successfully vacuumed it. However, I don't want to make performance any
worse. The table does have a serial key, so I'm thinking the best
options will be either to partition by range on the serial key, or to
partition on the keycol character column using an IN list in the check
constraints, which would allow the planner to actually leverage the SQL
above. I suspect a typecast to integer in the check constraints will
prove to be a bad idea if the keycol column remains a varchar.
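Roughly what I have in mind for the IN-list option, using old-style
inheritance partitioning; the child table name and the value list below
are just placeholders:

    CREATE TABLE problemchild_p01 (
        CHECK ( keycol IN ('217765', '112362') )  -- placeholder list for this child's keys
    ) INHERITS (problemchild);

    CREATE INDEX problemchild_p01_keycol ON problemchild_p01 (keycol);

    -- let the planner skip children whose CHECK can't match keycol = '217765'
    SET constraint_exclusion = partition;

With constraint_exclusion enabled, a query like the select above should
only touch the child whose check constraint contains that key.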
Thoughts?
Here's the table:
Table "problemchild"
Column | Type |
Modifiers
-----------+--------------------------+--------------------------------------------------------------------
keycol | character varying |
blob_data | bytea |
removed_date | timestamp with time zone |
alt_key | bigint | not null default
nextval('problemchild_alt_key_seq'::regclass)
Indexes:
"pc_pkey" PRIMARY KEY, btree (alt_key)
"key2" btree (keycol)
Thanks in advance