On Tue, Feb 12, 2013 at 09:41:20PM +0100, Roy Sigurd Karlsbakk wrote:
> Wouldn't it be better to allow the sysadmin to determine the safety?

The sysadmin can override, but - say you're using bcache for your root
filesystem and after a reboot the SSD doesn't come up, for whatever reason.
How's the sysadmin supposed to know for sure whether there was dirty data in
the cache? There's no way to track that reliably unless bcache tracks it
itself, in the backing device's superblock.

Say there wasn't any dirty data in the cache, so you can safely run without
the cache device - so you do, and you can boot up and use your machine. Then
later you figure out what was wrong with the SSD (cable got unplugged?), so
you re-enable caching. But the cache is now inconsistent - you _cannot_ use
that cached data.

With the backing device superblock, bcache can trivially note that the cache
is out of sync and make sure the stale cached data isn't used (see the sketch
at the end). If we didn't have that, the sysadmin would have to make _sure_ to
pass the right flag when reattaching to say the cached data shouldn't be used,
otherwise he's just corrupted all his data.

> Also, if running in writethrough, why shouldn't an SSD be allowed to be
> added in realtime?

If you're running in writethrough, you'd already be caching... not sure what
you mean?

> All of this works well with systems like ZFS. I really don't see a reason
> for a filesystem being created to allow caching

ZFS is a filesystem that also does caching; bcache isn't a filesystem. Did you
get something backwards...?
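To make the superblock argument concrete, here's a minimal userspace sketch of
the idea - not the actual bcache implementation, and all the names (bdev_state,
run_without_cache, reattach_cache) are made up for illustration: the backing
device's superblock records whether the cache was in sync, so a later reattach
can tell stale cached data apart from good data without the admin having to
remember anything.

	/*
	 * Sketch only: backing-device superblock state tracking.
	 * Names are illustrative, not bcache's real identifiers.
	 */
	#include <stdio.h>
	#include <stdbool.h>

	enum bdev_state {
		BDEV_CLEAN,	/* cache held no dirty data */
		BDEV_DIRTY,	/* cache holds data the backing device lacks */
		BDEV_STALE,	/* backing device was written without the cache */
	};

	struct backing_sb {
		enum bdev_state state;
	};

	/* Admin forces the backing device online without its cache. */
	static void run_without_cache(struct backing_sb *sb)
	{
		/* Anything still sitting in the detached cache is now out of date. */
		sb->state = BDEV_STALE;
	}

	/* The cache device shows up again and is reattached. */
	static void reattach_cache(struct backing_sb *sb)
	{
		if (sb->state == BDEV_STALE) {
			/* invalidate_cached_data(); -- hypothetical helper */
			printf("cache contents stale: invalidating before use\n");
			sb->state = BDEV_CLEAN;
			return;
		}
		printf("cache consistent: reusing cached data\n");
	}

	int main(void)
	{
		struct backing_sb sb = { .state = BDEV_CLEAN };

		run_without_cache(&sb);	/* boot without the SSD */
		reattach_cache(&sb);	/* SSD comes back later */
		return 0;
	}

The point is just that the "is the cache in sync?" bit lives on the backing
device, so it survives the cache disappearing - without it, the reattach path
has no way to know the cached data went stale.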