I'm building a server to do some KVM virtualization. Right now I've got 4 2TB Western Digital RE4 hard disks and 2 256GB Samsung 840 Pro SSDs. My initial intent was to build a RAID10 from the 2TB hard disks and a RAID1 from the SSDs, create two separate LVM volume groups, and manually partition filesystems between them based on the need for performance.

I was wondering whether this would instead make a good use case for bcache: use the 256GB RAID1 as a cache in front of the 4TB RAID10 backing device. Ideally that would enhance the performance of all I/O, rather than just whatever filesystems were manually placed on the SSDs in my original plan.

From my research so far, it seems that bcache in the 3.10 kernel is considered stable enough for production, and there shouldn't be any issues using software RAID for the backing and cache devices? Even with writeback enabled?

Any gotchas or pointers for this potential deployment? Thanks much.
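For reference, here is a rough sketch of how I imagine wiring this up (assumes mdadm and bcache-tools are installed; the /dev/sd* and /dev/md* names are placeholders for the actual devices):

```shell
# Build the two arrays: RAID10 across the four HDDs, RAID1 across the two SSDs.
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
mdadm --create /dev/md1 --level=1  --raid-devices=2 /dev/sde /dev/sdf

# Format the RAID10 as the bcache backing device and the RAID1 as the cache device.
make-bcache -B /dev/md0
make-bcache -C /dev/md1

# Attach the cache set to the backing device; the cache-set UUID comes from
# 'bcache-super-show /dev/md1' (placeholder below).
echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach

# Switch from the default writethrough mode to writeback.
echo writeback > /sys/block/bcache0/bcache/cache_mode

# A single LVM volume group can then sit on top of the cached device.
pvcreate /dev/bcache0
```

This is just my understanding of the setup steps from the documentation, not something I've run yet on this hardware.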