On 02/19/2013 06:23 AM, Roy Sigurd Karlsbakk wrote:
> ----- Original message -----
>> ZFS is not just a filesystem. It is a complete block device storage
>> management stack that has a filesystem component.
> I know ZFS quite well, so I'm aware the comparison is somewhat unfair.

Unfair is an understatement. But I also think it is quite a shame how many things Sun, and now Oracle, have been promising to add for years that now seem to be fairy tales. The two biggest I can think of are VDEV removal and RAIDZ[2,3] stripe expansion. I've been working with ZFS since I was able to get the first internal alpha from the team at Sun that would apply to the Nevada dev tree. I think that was about build 17; I believe it was build 27a when it finally made it into the ON consolidation. A really smart bunch of guys was working on it at the time, though almost all of them have jumped ship by now.

>> The Linux tool chain can be more complicated but it is vastly more
>> flexible. With the addition of a caching target all of the pieces are
>> there to be able to build a tiering storage system in any way your use
>> case needs it to be.
> Still, flashcache/enhanceio seems to be able to handle this in a very flexible way. I really don't want to recreate my home RAID (7,2TiB) just to add cache to it…

I agree with this in principle, but having to rebuild the array to add a cache can have some positive side effects. It gives you an opportunity to re-align your LVs if, like most people, you have extended them one or more times; perhaps your volume layout is not ideal and you have been putting off fixing that. Chances are that if you want to add a cache, the filesystems are relatively busy and may benefit from being re-created to reduce file fragmentation. Attaching a cache to an existing, unprepared device would be very nice to have, but for it to happen your entire block tool chain would have to be prepared for that possibility. That's why ZFS can do it.
A question for Kent: once you have bcache and its tools built, installed and running, is there anything to stop a user from always preparing devices of whatever type you choose with the superblock info needed to accept a cache dynamically? For example, if I create an MD RAID device and, before I pvcreate or do anything else with it, I format it for bcache but don't actually attach a cache device, are there any negative effects that can come from that? Can I then attach a cache device to it at any time? I realize that once attached in writeback mode it becomes non-detachable. The same question applies to raw sd devices and LVM volumes.

> Vennlige hilsener / Best regards
>
> roy
> --
> Roy Sigurd Karlsbakk
> (+47) 98013356
> roy@xxxxxxxxxxxxx
> http://blogg.karlsbakk.net/
> GPG Public key: http://karlsbakk.net/roysigurdkarlsbakk.pubkey.txt
> --
> [Translated from Norwegian:] In all pedagogy it is essential that the curriculum be presented intelligibly. It is an elementary imperative for all pedagogues to avoid excessive use of idioms of xenotypic etymology. In most cases adequate and relevant synonyms exist in Norwegian.
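For what it's worth, here is a sketch of the workflow the question describes, using bcache-tools as I understand them. The device names (/dev/md0 for the backing device, /dev/sdc for the eventual SSD cache) are placeholders, and the sysfs paths are from the bcache documentation; I have not verified every step on a live system, so treat this as an illustration rather than a recipe.

```shell
# Format the MD device as a bcache backing device only -- no cache
# device exists yet. This writes the bcache superblock that (per the
# question) would let a cache be attached later.
make-bcache -B /dev/md0

# Register it so the kernel creates /dev/bcache0; the device is usable
# immediately in "no cache attached" passthrough mode, so you can
# pvcreate /dev/bcache0, mkfs it, etc.
echo /dev/md0 > /sys/fs/bcache/register

# ...later, when an SSD becomes available, format it as a cache set...
make-bcache -C /dev/sdc
echo /dev/sdc > /sys/fs/bcache/register

# ...find the cache set UUID and attach it to the running backing device.
bcache-super-show /dev/sdc        # note the cset.uuid line
echo <cset-uuid> > /sys/block/bcache0/bcache/attach

# Caching mode can then be chosen; writeback is the mode the question
# notes as effectively non-detachable while dirty data exists.
echo writeback > /sys/block/bcache0/bcache/cache_mode
```

The key point being probed above is the first step: because the bcache superblock sits at the start of the backing device, it has to be there before LVM or a filesystem is layered on top, which is exactly why "prep now, attach cache later" is attractive.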