On 31 December 2011 00:54, Simon Windsor <simon.windsor@xxxxxxxxxxxxxxx> wrote:
> I am struggling with the volume and number of XML files a new application is
> storing. The table pg_largeobjects is growing fast, and despite the efforts
> of vacuumlo, vacuum and auto-vacuum it keeps on growing in size

I can't help but wonder why you're using large objects for XML files. Wouldn't a text field be sufficient? Text fields get TOASTed, which would save you some space. Another option would be to use xml fields, but that depends on whether your documents are valid XML and whether you have any desire to use the XML-specific features such fields provide. There will probably be a performance hit for this.

I do realise that you can stream large objects (a typical reason for choosing them), but with XML files that doesn't seem particularly useful to me: after all, they're not valid unless complete. You have to read the whole file into memory _somewhere_ before you can interpret it meaningfully. The exception to that rule is a SAX parser (which also explains why such parsers usually have fairly limited features).

Of course there are valid reasons for choosing large objects for XML files, and I assume yours are among them. If they're not, however, maybe you should have a thorough look at your problem again.

--
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
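
P.S. A minimal sketch of the text/xml-field alternative, in case it helps (table and column names are made up; adjust to your schema):

```sql
-- Hypothetical table: store each document in an xml column instead of a
-- large object. Values larger than ~2 kB are TOASTed (compressed and/or
-- moved out-of-line) automatically, with no vacuumlo bookkeeping.
CREATE TABLE xml_docs (
    id  serial PRIMARY KEY,
    doc xml NOT NULL
);

-- xmlparse validates the document on insert; use a plain text column
-- instead if you don't want that check.
INSERT INTO xml_docs (doc)
VALUES (xmlparse(document '<order><item qty="2"/></order>'));
```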
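
To make the SAX point concrete, here is a small example (Python's stdlib `xml.sax`, with a made-up handler): the handler only ever sees one event at a time, so it can count elements without holding the whole tree in memory, which is also why anything needing random access to the document is off the table.

```python
import io
import xml.sax


class TagCounter(xml.sax.ContentHandler):
    """Counts elements as they stream past; never builds the full tree."""

    def __init__(self):
        super().__init__()
        self.counts = {}

    def startElement(self, name, attrs):
        # Called once per opening tag, in document order.
        self.counts[name] = self.counts.get(name, 0) + 1


doc = "<root><item>a</item><item>b</item></root>"
handler = TagCounter()
xml.sax.parse(io.StringIO(doc), handler)
print(handler.counts)  # {'root': 1, 'item': 2}
```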