I think I see a fatal flaw (of my own making) that will cause the CLUSTER to fail.
Based on the information I received from previous posts, I am going to change
my game plan. If anyone has thoughts on a different process, or
can confirm that I am on the right track, I would appreciate your
input.
1. I am going to run CLUSTER on the table instead of VACUUM FULL.
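For reference, this is a minimal sketch of what I plan to run; the table and index names here are placeholders, not the real ones:

```sql
-- CLUSTER rewrites the table in index order, so it needs roughly a full
-- second copy of the table's disk space free while it runs.
CLUSTER mytable USING mytable_pkey;

-- Refresh planner statistics after the rewrite.
ANALYZE mytable;
```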
Kevin Grittner stated:
If you have room for a second copy of your data, that is almost
always much faster, and less prone to problems.
I looked at the sizes of the tables in the database, and the table I am
trying to run the CLUSTER on is 275G, while I only have 57G free. I don't
know how much of that 275G is live data and how much is dead space, so I
can't tell whether there is room for a second copy of the data. I am
guessing the CLUSTER would fail due to lack of space.
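In case it helps, these are the sorts of queries I used to check the sizes (table name is a placeholder); if the pgstattuple contrib module were installed, it could show exactly how much of the 275G is dead space:

```sql
-- Total on-disk size of the table, including indexes and TOAST data.
SELECT pg_size_pretty(pg_total_relation_size('mytable'));

-- Rough live-vs-dead row counts from the statistics collector,
-- which hint at how much of the table is bloat.
SELECT n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'mytable';
```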
Are there any other options?
If I unload the table to a flat file, drop the table from the
database, recreate the table, and finally reload the data, will
that reclaim the space?
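Something like the following is what I have in mind, sketched with placeholder names and paths (using TRUNCATE instead of DROP/CREATE, since that keeps the table definition and permissions while still returning the disk space immediately):

```sql
-- Unload the table to a flat file. This is a server-side path;
-- from psql on a client machine, \copy would write locally instead.
COPY mytable TO '/tmp/mytable.dat';

-- TRUNCATE releases the table's disk space back to the OS at once,
-- without needing room for a second copy.
TRUNCATE mytable;

-- Reload the data. Dropping indexes before the load and recreating
-- them afterward is usually much faster than maintaining them row by row.
COPY mytable FROM '/tmp/mytable.dat';

-- Refresh planner statistics after the reload.
ANALYZE mytable;
```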
Kevin - thanks for the book recommendation. Will order it tomorrow.
Thanks again for all the technical help!
Dave
--
Sent via pgsql-admin mailing list (pgsql-admin@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin