> On sequential read speed HDs outperform flash disks... only on random
> access the flash disks are better. So if your application is a DW one,
> you're very likely better off using HDs.

This looks likely to be a non-issue shortly; see here:

http://www.reghardware.co.uk/2007/03/27/sams_doubles_ssd_capacity/

I still think this sort of device will become the OLTP device of choice
before too long - even if we do have to watch the wear rate.

> WARNING: modern TOtL flash RAMs are only good for ~1.2M writes per
> memory cell. And that's the =good= ones.

Well, my original question was whether the physical update pattern of
the server has hotspots that will tend to cause a problem in normal
usage if the wear levelling (such as it is) doesn't entirely spread the
load. (Some back-of-envelope numbers in the P.S. below.)

The sorts of application I'm interested in will not update individual
data elements very often. There is a danger that index nodes might be
rewritten frequently, but one might want to allow indexes to persist
lazily and be recovered by a scan after a crash that leaves them dirty,
so that they can be cached and avoid such an access pattern.

Out of interest with respect to WAL - has anyone tested whether one
could tune the group commit to pick up slightly bigger blocks and write
the WAL using compression, to raise the *effective* write speed of the
media? Once again, most of the data I'm interested in is far from
random and tends to compress quite well. (A rough sketch of the idea is
in the second P.S. below.)

If the WAL write is committed to the disk platter, is it OK for
arbitrary data blocks to have failed to reach the disk, given that the
updates for committed transactions can be recovered from the log? Is
there any documentation on the write barrier usage?

James
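
P.S. A back-of-envelope check on the wear-rate worry. All of the
figures here are assumptions for illustration (a hypothetical 32 GB
device, a 20 MB/s sustained write rate), not measurements; only the
~1.2M writes/cell number comes from the warning quoted above.

    # Rated flash endurance, perfect vs. absent wear levelling.
    CAPACITY = 32e9     # bytes; hypothetical device size
    CYCLES = 1.2e6      # rated writes per cell (from the quote above)
    WRITE_RATE = 20e6   # bytes/sec sustained; assumed

    # Perfect levelling: every cell shares the write load equally.
    years = CAPACITY * CYCLES / WRITE_RATE / (3600 * 24 * 365)
    print("perfect levelling: ~%.0f years" % years)        # ~61 years

    # No levelling, one hot page (say an index root) rewritten in
    # place 100 times a second: that page's cells die in hours.
    hours = CYCLES / 100 / 3600
    print("hot page, no levelling: ~%.1f hours" % hours)   # ~3.3 hours

So with decent levelling the rated endurance looks like a non-problem;
the whole question is whether the hotspots actually get spread.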
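
P.P.S. A rough sketch of the compressed group commit idea - this is not
how the server's WAL writer actually works, just an illustration that
batching records into slightly bigger blocks and compressing them
trades CPU for effective media bandwidth. The record format and the
flush_group() helper are made up for the example.

    import zlib

    def flush_group(records, out):
        # Batch the accumulated WAL records into one block, compress,
        # and write a length header so recovery can find the block
        # boundaries again.
        raw = b"".join(records)
        packed = zlib.compress(raw, 1)   # cheap level; latency matters
        out.write(len(packed).to_bytes(4, "little"))
        out.write(packed)
        return len(raw), len(packed)

    # Toy demo with compressible row-like data.
    records = [b"INSERT INTO t VALUES (%05d, 'aaaa')" % i
               for i in range(256)]
    with open("wal_demo.bin", "wb") as out:
        raw_n, packed_n = flush_group(records, out)
    print("raw %d B -> %d B on media: effective speed x%.1f"
          % (raw_n, packed_n, raw_n / packed_n))

If the compression ratio on real WAL traffic is anything like that, the
effective sequential write speed goes up by the same factor, at the
cost of CPU in the commit path.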