On 3/13/06, Tom Lane <tgl@xxxxxxxxxxxxx> wrote:
> Brandon Keepers <bkeepers@xxxxxxxxx> writes:
> > Thanks for your quick response!  I had actually just been trying that
> > (with 7.1) and came across another error:
>
> > NOTICE:  ShmemAlloc: out of memory
> > NOTICE:  LockAcquire: xid table corrupted
> > dumpBlobs(): Could not open large object.  Explanation from backend:
> > 'ERROR:  LockRelation: LockAcquire failed
>
> Ugh :-(  How many blobs have you got, thousands?  7.0 stores each blob
> as a separate table, and I'll bet it is running out of lock table space
> to hold a lock on each one.  My recollection is that we converted blob
> storage to a single pg_largeobject table precisely because of that
> problem.

Looks like there are over 17,000 blobs. :(  But they're all very small,
if that makes a difference.

> What you'll need to do to get around this is to export each blob in a
> separate transaction (or at least no more than a thousand or so blobs
> per transaction).  It looks like pg_dumplo might be easier to hack to do
> things that way --- like pg_dump, it puts a BEGIN/COMMIT around the
> whole run, but it's a smaller program and easier to move those commands
> in.

Unfortunately, I don't know C.  Would someone be willing to help me hack
pg_dumplo in exchange for money?

> Another possibility is to increase the lock table size, but that would
> probably require recompiling the 7.0 backend.  If you're lucky,
> increasing max_connections to the largest value the backend will support
> will be enough.  If you've got many thousands of blobs there's no hope
> there, but if it's just a few thousand this is worth a try before you go
> hacking code.

I'm not the admin of the box that this database is on, so I don't have
any control over it.  I'm working on moving it to a box that I am the
admin of.  But it sounds like this wouldn't work anyway, since I have so
many blobs.

> 		regards, tom lane

Thanks again for your help, Tom!

Brandon
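
[Editor's note: to illustrate the batching approach Tom describes above,
here is a minimal sketch of a standalone exporter that commits every
thousand blobs, so the 7.0 lock table never has to hold locks on all
17,000 per-blob tables at once.  It is written against libpq rather than
being a patch to pg_dumplo, and it assumes, purely for illustration, that
the blob OIDs are stored in an "attach_oid" column of a table named
"attachments"; neither name comes from the thread.  Each blob is written
to a file named after its OID.]

/*
 * Sketch: export large objects in batches of BLOBS_PER_XACT per
 * transaction, so locks on the per-blob tables are released at each
 * COMMIT instead of accumulating for the whole run.
 */
#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

#define BLOBS_PER_XACT 1000

int
main(void)
{
    PGconn     *conn = PQconnectdb("dbname=mydb");
    PGresult   *res;
    int         i, ntuples;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* fetch the list of blob OIDs outside of any long transaction */
    res = PQexec(conn, "SELECT attach_oid FROM attachments");
    if (PQresultStatus(res) != PGRES_TUPLES_OK)
    {
        fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
        return 1;
    }
    ntuples = PQntuples(res);

    for (i = 0; i < ntuples; i++)
    {
        Oid     lobj = (Oid) strtoul(PQgetvalue(res, i, 0), NULL, 10);
        char    fname[64];

        /* start a new transaction every BLOBS_PER_XACT blobs */
        if (i % BLOBS_PER_XACT == 0)
            PQclear(PQexec(conn, "BEGIN"));

        snprintf(fname, sizeof(fname), "blob_%u.dat", lobj);
        if (lo_export(conn, lobj, fname) != 1)
            fprintf(stderr, "lo_export of %u failed: %s", lobj,
                    PQerrorMessage(conn));

        /* commit at the end of each batch, and after the last blob,
         * so the locks taken during the batch are released */
        if (i % BLOBS_PER_XACT == BLOBS_PER_XACT - 1 || i == ntuples - 1)
            PQclear(PQexec(conn, "COMMIT"));
    }

    PQclear(res);
    PQfinish(conn);
    return 0;
}

[The point is the per-batch COMMIT: each one releases the locks the
backend took on the blobs exported in that batch, which is what keeps
the 7.0 lock table from running out of shared memory.]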