haraldarminmassa@xxxxxxxxx ("Harald Armin Massa") writes:
>> Not likely to change in the future, no.  Slony uses triggers to manage
>> the changed rows.  We can't fire triggers on large object events, so
>> there's no way for Slony to know what happened.
>
> That leads me to a question I have often wanted to ask:
>
> Is there any reason to create NEW PostgreSQL databases using large
> objects, now that there are bytea and TOAST (legacy needs aside)?
>
> From what I have read, they need special care in dump/restore, force
> the use of special APIs at creation time, and do not work with Slony...

They are useful if you really need to access portions of large objects
efficiently.  For instance, if you find that you frequently need to
modify large objects in place, that should be much more efficient
through the large object interface than it would be with a bytea
column.  It ought to be a lot cheaper to lo_lseek() to a position,
lo_read() a few bytes, and lo_write() a few bytes than to pull an
entire 42MB object out, read off a fragment, and then rewrite the whole
tuple.

That being said, I generally prefer bytea, because it doesn't force me
into a rather weird "captive interface" to get at the data.  If I found
myself needing to make wacky updates to a large object, I would wonder
whether it wouldn't be better to express the data as a set of tuples,
so that I didn't have a large object in the first place...

-- 
(format nil "~S@~S" "cbbrowne" "linuxdatabases.info")
http://www3.sympatico.ca/cbbrowne/x.html
"... They are not ``end users'' until someone presupposes them as such,
as witless cattle." -- <craig@xxxxxxxxxxx>
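
P.S.  For the archives, a rough sketch of that lo_lseek()/lo_write()
pattern through libpq.  Connection setup and error reporting are left
out, the large object is assumed to already exist (created earlier with
lo_creat() or lo_import()), and patch_large_object() is just an
illustrative name, not anything out of the docs:

#include <stdio.h>
#include <libpq-fe.h>
#include <libpq/libpq-fs.h>     /* INV_READ / INV_WRITE */

/* Overwrite 'len' bytes at 'offset' inside an existing large object,
 * without pulling the whole object across the wire. */
static int
patch_large_object(PGconn *conn, Oid lobj, int offset,
                   const char *buf, size_t len)
{
    PGresult *res;
    int       fd, rc = -1;

    /* large object operations must run inside a transaction */
    res = PQexec(conn, "BEGIN");
    PQclear(res);

    fd = lo_open(conn, lobj, INV_READ | INV_WRITE);
    if (fd >= 0 &&
        lo_lseek(conn, fd, offset, SEEK_SET) == offset &&
        lo_write(conn, fd, buf, len) == (int) len)
        rc = 0;                 /* only 'len' bytes crossed the wire */

    if (fd >= 0)
        lo_close(conn, fd);

    res = PQexec(conn, rc == 0 ? "COMMIT" : "ROLLBACK");
    PQclear(res);
    return rc;
}

The equivalent with a bytea column would have to fetch and then UPDATE
the entire value, which is exactly the cost being avoided here.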