After dropping an index to do some full-table updating, I'm running into an out-of-memory issue while recreating one of my indices. This is on 8.3 running on Linux. The table in question has about 300M rows. The index is on a single integer column, with approximately 4000 unique values among the rows:

create index val_datestamp_idx on vals(datestamp) tablespace space2;

About 30 seconds into the query, I get:

ERROR:  out of memory
DETAIL:  Failed on request of size 536870912.

Increasing maintenance_work_mem from 1GB to 2GB changed nothing at all: exact same error at the exact same time. Watching memory on the machine shows that the out-of-memory error happens when the machine is only at about 35% user. A create index concurrently fails with an identical error. Two other (multicolumn) indexes on the same table have already been successfully recreated, so this puzzles me.

Actually, while I was writing this, I added an additional column to the index and it now appears to be completing: memory has reached about the point at which it had been failing and is now holding steady, and the query has been running significantly longer than the 30 seconds or so it previously took to error out. I sort by both columns at times, so the extra column may in fact turn out to be useful, but the failure of the single-column create index when the other creates succeeded has me confused. Can anyone shed some light on the situation?

-- 
- David T. Wilson
david.t.wilson@xxxxxxxxx

-- 
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
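P.S. For anyone trying to reproduce this, the session looked roughly like the sketch below. The second column name in the final statement is illustrative only (the post above doesn't name it), and the SET values shown are simply the two I tried:

```sql
-- Raise sort memory for the index build in this session only;
-- both 1GB and 2GB produced the identical failure.
SET maintenance_work_mem = '2GB';

-- Plain build: fails about 30 seconds in with
-- "ERROR: out of memory / Failed on request of size 536870912".
CREATE INDEX val_datestamp_idx ON vals (datestamp) TABLESPACE space2;

-- Concurrent build: fails with the identical error.
CREATE INDEX CONCURRENTLY val_datestamp_idx ON vals (datestamp) TABLESPACE space2;

-- Two-column build: appears to complete.
-- (second_col is a placeholder for the extra column I added)
CREATE INDEX val_datestamp_idx ON vals (datestamp, second_col) TABLESPACE space2;
```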