> On top of that bcache device, I decided to add an LVM layer (making
> /dev/bcache* a PV), which is really helpful, e.g. with the base problem
> of this thread: You can create additional bcache devices and pvmove
> your data to the new PV, without interrupting production.

Yes, if you start from scratch with some disk setup, including LVM gives
you a lot of flexibility and keeps you almost always online (a rough
sketch of that pvmove migration is at the end of this mail).

>> I think I would choose to add bcache to each of the four hard disks,
>
> If you'd do that with a single caching device, you're in for
> contention. My gut feeling tells me that running a single bcache
> backing device/caching device combo on top of MD-RAID is less straining
> than running MD-RAID across a bunch of bcache devices with a common
> caching device: The MD advantage of spreading the load across multiple
> "disks" is countered by accessing a common SSD.

You are right: on second thought, if you use MD-RAID, bcache on top of
MD is the better choice in most cases as far as I can see (see the
second sketch below).

I had been considering MD RAID instead of btrfs raid. I tried/tested a
3-disk RAID5 for some time, but still decided to let btrfs alone handle
the multiple devices.

W.r.t. contention when one SSD caches 4 HDDs: I can definitely notice it
for some access patterns/tasks. I originally chose btrfs raid10 when all
the HDDs were much older/slower models and I wanted high enough transfer
rates at the file level. But the HDDs are now all newer/faster models,
and the strict need for high transfer rates is gone. It currently makes
more sense to use fewer SATA ports and fewer spinning disks, and to
re-use HDD(s) for redundancy/backup at the file-system level.
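For reference, the online migration described above would look roughly
like this (a sketch only; the device names /dev/sdc, /dev/bcache0,
/dev/bcache1 and the VG name "myvg" are made up):

  # Create a new backing device and attach it to the existing cache set
  # (the cset UUID comes from 'bcache-super-show' on the cache device):
  make-bcache -B /dev/sdc
  echo <cset-uuid> > /sys/block/bcache1/bcache/attach

  # Turn the new bcache device into a PV and migrate, all online:
  pvcreate /dev/bcache1
  vgextend myvg /dev/bcache1
  pvmove /dev/bcache0 /dev/bcache1   # LVs stay mounted during the move
  vgreduce myvg /dev/bcache0
  pvremove /dev/bcache0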
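And the single backing/caching combo on top of MD-RAID would be
something like this (again only a sketch; /dev/sd[b-e] and /dev/nvme0n1
are made up):

  # One MD array as the single bcache backing device:
  mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]
  make-bcache -B /dev/md0
  # The SSD as the single caching device, attached to bcache0:
  make-bcache -C /dev/nvme0n1
  echo <cset-uuid> > /sys/block/bcache0/bcache/attach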