Isn't it likely that a single stream (or perhaps one that can be partitioned across spindles) will tend to be fastest, since a nicely localised sequential stream a) allows compression of reasonably sized blocks and b) fits with commit aggregation?

RAM capacity on servers keeps going up, but the size of a customer address or a row on an invoice doesn't. I'd like to see an emphasis on update speed, on the assumption that most hot data is cached most of the time.

My understanding is also that storing data column-wise is handy once it's persisted, because linear scans are much faster. I saw it once with a system modelled after APL; it blew me away even on a SPARCstation 10 once the data was organised and could be mapped.

Still, for the moment, anything that helps the existing system would be good.

Would it help to allow triggers to be deferrable to commit time, as well as to end of statement (and per row)? It seems to me it should, at least for triggers that raise 'something changed' events. And/or allow a specification that events can fold and should be very cheap. (I don't know whether this is already the case; how this works isn't documented as well as I'd like.)

James
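
P.S. To make the trigger idea concrete, here is a rough sketch in today's syntax. If I read the docs right, a deferred constraint trigger already fires at commit time, and duplicate NOTIFYs within one transaction are folded into a single event, which is close to what I'm after. The "invoice" table and the notify_invoice_changed function are made-up names for illustration:

    -- Hypothetical example: signal 'invoice changed' once per committing transaction.
    CREATE FUNCTION notify_invoice_changed() RETURNS trigger AS $$
    BEGIN
        NOTIFY invoice_changed;   -- duplicate NOTIFYs in one transaction are folded
        RETURN NULL;              -- return value is ignored for AFTER triggers
    END;
    $$ LANGUAGE plpgsql;

    -- DEFERRABLE INITIALLY DEFERRED queues the row events until commit.
    CREATE CONSTRAINT TRIGGER invoice_changed
        AFTER INSERT OR UPDATE OR DELETE ON invoice
        DEFERRABLE INITIALLY DEFERRED
        FOR EACH ROW
        EXECUTE PROCEDURE notify_invoice_changed();

A client that has done LISTEN invoice_changed then gets at most one wakeup per transaction, however many rows were touched, which is roughly the cheap, folding behaviour I'm asking about.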