Ric Wheeler wrote:
> One way to test this with reasonable, commodity hardware would be
> something like the following:
>
> (1) Get an automated power kill setup to control your server etc.

Good plan.

Another way to test the entire software stack, though not the physical
disks, is to run the entire test using VMs, simulating hard disk write
caching and power failure inside the VM.

KVM would be a great candidate for that, as it runs VMs as ordinary
processes and the disk I/O emulation is quite easy to modify.

As most issues are probably software issues (kernel, filesystems, apps
not calling fsync or assuming barrierless O_DIRECT/O_DSYNC are
sufficient, network fileserver protocols, etc.), it's surely worth a
look.  It could be much faster than the physical version too - in other
words, more complete testing of the software stack for the available
resources.

With the ability to "fork" a running VM's state by snapshotting it and
continuing, it would even be possible to simulate power-failure cache
loss scenarios at many points in the middle of a stress test, with the
stress test continuing to run - no full reboot needed at every point.

That way, maybe deliberate trace points could be placed in the software
stack at places where power-failure cache loss seems likely to cause a
problem.

-- Jamie
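
For concreteness, here is a rough Python sketch of what a kill-and-check
harness along these lines might look like.  Everything in it - the image
names, the choice of ext4, the use of qemu-nbd for the offline check - is
illustrative rather than taken from the mail above, and note that simply
SIGKILL-ing qemu does not actually lose the emulated disk's write cache;
that part still needs the block-emulation change suggested above.

#!/usr/bin/env python3
# Rough kill-and-check harness sketch -- illustrative only; image names,
# memory size, filesystem type etc. are assumptions, not from the mail
# above.
#
# Assumptions:
#   * root.qcow2 boots a guest whose init starts a filesystem stress
#     workload against the second disk on its own;
#   * data.qcow2 holds a bare ext4 filesystem (no partition table) that
#     the workload hammers on;
#   * the harness runs as root with the nbd module loaded, so the data
#     image can be fsck'd offline between rounds;
#   * SIGKILL-ing qemu stands in for pulling the plug from the guest's
#     point of view.  Writes qemu already handed to the host survive in
#     the host page cache, so really losing the emulated disk write
#     cache needs the small change to the block emulation suggested
#     above.

import random
import signal
import subprocess
import time

ROOT_IMAGE = "root.qcow2"
DATA_IMAGE = "data.qcow2"
NBD_DEV = "/dev/nbd0"

QEMU_CMD = [
    "qemu-system-x86_64", "-enable-kvm", "-m", "1024",
    "-drive", f"file={ROOT_IMAGE},format=qcow2,cache=writeback",
    "-drive", f"file={DATA_IMAGE},format=qcow2,cache=writeback",
    "-display", "none",
]

def power_fail_round(min_s=5, max_s=60):
    """Boot the guest, let the stress workload run for a random interval,
    then kill -9 the qemu process to simulate sudden power loss."""
    vm = subprocess.Popen(QEMU_CMD)
    try:
        time.sleep(random.uniform(min_s, max_s))
    finally:
        vm.send_signal(signal.SIGKILL)
        vm.wait()

def data_image_is_consistent():
    """Export the data image over NBD and run a read-only fsck on it."""
    subprocess.run(["qemu-nbd", "-c", NBD_DEV, DATA_IMAGE], check=True)
    try:
        # -n: read-only, answer "no" to all repairs; -f: force full check.
        rc = subprocess.run(["fsck.ext4", "-n", "-f", NBD_DEV]).returncode
    finally:
        subprocess.run(["qemu-nbd", "-d", NBD_DEV], check=True)
    return rc == 0

if __name__ == "__main__":
    for i in range(100):
        power_fail_round()
        status = "clean" if data_image_is_consistent() else "INCONSISTENT"
        print(f"round {i}: {status}")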