AFAICS, TOAST was invented so that big attributes don't get in the way (of readahead, the buffer cache, and so on) when working with the other attributes. This is based on the assumption that the other attributes are accessed more often than the full contents of the big attributes.

Now I wonder how disk blocks are laid out when loading a dump in which big text data ends up in out-of-line storage. If disk block usage followed this pattern:

    heap page
    toast heap page 1
    toast heap page ..
    toast heap page n
    heap page
    toast heap page 1
    toast heap page ..
    toast heap page n
    heap page
    toast heap page 1
    toast heap page ..
    toast heap page n
    heap page

and if, further, the assumption is correct that the granularity of the lower-level caches is bigger than the PostgreSQL page size, then loading from a dump would destroy the advantage of out-of-line storage: reading a heap page would also pull the neighbouring TOAST pages into the cache.

I haven't got any numbers to back this theory up. What do you think?

Markus Bertheau
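
P.S. For anyone who wants to look at this on their own data, here is a minimal sketch (assuming a hypothetical table named "docs" with a large text column) that finds the table's TOAST relation and the relfilenode of both, i.e. the file names under the database directory. With those two numbers one could then check how the filesystem actually interleaved the blocks of the two files after a restore.

    -- "docs" is an assumed example table name, not anything from the thread.
    -- Join pg_class to its own TOAST relation via reltoastrelid and report
    -- the on-disk file identifiers (relfilenode) of the heap and TOAST heap.
    SELECT c.relname,
           c.relfilenode,
           t.relname     AS toast_relname,
           t.relfilenode AS toast_relfilenode
    FROM pg_class c
    JOIN pg_class t ON t.oid = c.reltoastrelid
    WHERE c.relname = 'docs';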