On Tue, 27 Jun 2017, Henk Slager wrote:
> > On top of that bcache device, I decided to add an LVM layer (making
> > /dev/bcache* a PV), which is really helpful, e.g. with the base problem of
> > this thread: you can create additional bcache devices and pvmove your data
> > to the new PV without interrupting production.
>
> Yes, if you start from scratch with some disk setup, including LVM can
> give you a lot of flexibility and almost always-online operation.
>
> >> I think I would choose to add bcache to each of the four hard disks,
> >
> > If you'd do that with a single caching device, you're in for contention. My
> > gut feeling tells me that running a single bcache backing device/caching
> > device combo on top of MD-RAID is less straining than running MD-RAID across
> > a bunch of bcache devices with a common caching device: the MD advantage of
> > spreading the load across multiple "disks" is countered by accessing a
> > common SSD.
>
> You are right. On second thought, if you use MD-RAID, bcache on top
> of MD is the better choice in most cases as far as I can see. I had
> been considering MD RAID instead of btrfs raid, and tried/tested it
> with a 3-disk RAID5 for some time, but still decided to use only
> btrfs for handling multiple devices.

Definitely: bcache on top of md, not md on top of bcache. Putting md on
top of bcache would confuse bcache's sequential-write detection.

Also, with bcache on md, align the backing device using --data-offset.
The default of 8k is almost certainly not your md chunk size, and you
don't want write amplification. You might wish to align to your stripe
width (especially for raid5/6), i.e. --data-offset=chunk_size*num_data_disks,
expressed in sectors.

--
Eric Wheeler
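For concreteness, a minimal sketch of that arithmetic. The chunk size and
disk count below are assumptions for illustration; read your real chunk
size from `mdadm --detail /dev/mdX` and substitute your own values:

```shell
#!/bin/sh
# Compute a --data-offset so bcache data is aligned to the md stripe width.
# Assumed example values -- replace with your array's real parameters:
CHUNK_KIB=512      # md chunk size in KiB (mdadm's default in recent versions)
DATA_DISKS=3       # e.g. a 4-disk raid5 has 3 data disks

# Sectors are 512 bytes, so 1 KiB = 2 sectors.
OFFSET=$((CHUNK_KIB * 2 * DATA_DISKS))
echo "stripe width: $OFFSET sectors"

# Then format the backing device with that offset (destructive, so
# left commented out; /dev/md0 is a placeholder):
# make-bcache -B --data-offset "$OFFSET" /dev/md0
```

With a 512 KiB chunk and 3 data disks this gives 3072 sectors (1.5 MiB),
so every full stripe write lands on stripe boundaries instead of
triggering read-modify-write cycles in the raid5/6 parity logic.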