On Wed, 2009-04-15 at 09:51 -0400, Tom Lane wrote:
> Brian Cox <brian.cox@xxxxxx> writes:
> > I changed the logic to update the table in 1M row batches. However,
> > after 159M rows, I get:
>
> > ERROR: could not extend relation 1663/16385/19505: wrote only 4096 of
> > 8192 bytes at block 7621407
>
> You're out of disk space.
>
> > A df run on this machine shows plenty of space:
>
> Per-user quota restriction, perhaps?
>
> I'm also wondering about temporary files, although I suppose 100G worth
> of temp files is a bit much for this query. But you need to watch df
> while the query is happening, rather than suppose that an after-the-fact
> reading means anything.

Any time we get an out-of-space error we will be in the same situation.
When we get this error, we should report:

* a summary of current temp file usage
* df output (if possible on the OS)

Otherwise we'll always be left wondering what caused the error.

-- 
 Simon Riggs           www.2ndQuadrant.com
 PostgreSQL Training, Services and Support
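Until the server reports this automatically, a watcher run alongside the
query can capture the same information. Below is a minimal sketch in
Python; the PGDATA path, the base/pgsql_tmp temp directory, and the
5-second polling interval are assumptions about a default installation,
not details taken from this thread.

#!/usr/bin/env python3
# Sketch: poll free disk space and PostgreSQL temp file usage while a
# long query runs, so an out-of-space error can be diagnosed afterwards.
# PGDATA and the temp directory below are assumptions about a default
# layout; adjust them to your installation.

import os
import shutil
import time

PGDATA = "/var/lib/pgsql/data"                        # assumed data directory
TEMP_DIR = os.path.join(PGDATA, "base", "pgsql_tmp")  # default temp file location

def temp_usage_bytes(path):
    """Total size of temp files currently on disk (0 if the dir is absent)."""
    total = 0
    if os.path.isdir(path):
        for entry in os.scandir(path):
            if entry.is_file():
                total += entry.stat().st_size
    return total

# Poll until interrupted (Ctrl-C); each line is a timestamped snapshot.
while True:
    usage = shutil.disk_usage(PGDATA)  # same filesystem df would report on
    print(f"{time.strftime('%H:%M:%S')} "
          f"free={usage.free // (1024**2)} MB "
          f"temp={temp_usage_bytes(TEMP_DIR) // (1024**2)} MB")
    time.sleep(5)

Run it in a second terminal while the batch UPDATE executes; the last
lines printed before the error show how much free space and temp file
usage there actually was at the moment the relation could not be
extended.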