Hi Eric,
Quoting Eric Wheeler <bcache@xxxxxxxxxxxxxxxxxx>:
> On Wed, 11 May 2016, Jens-U. Mozdzen wrote:
>> Hi *,
>> Quoting Eric Wheeler <bcache@xxxxxxxxxxxxxxxxxx>:
>>> [...]
>>> I think this is the first time I've heard of Ceph being used in a bcache
>>> stack on the list. Are there any others out there with success? If so,
>>> what kernel versions and disk stack configuration?
>> After an extensive test period, we have just started a productive Ceph
>> environment on our bcache-based SAN servers:
>> - MD-RAID6 (several SAS disks) as bcache backing device
>> - MD-RAID1 (two SAS SSDs) as bcache cache device, only for that single backing device
>> - LVM on top of /dev/bcache0
>> - LVs, xfs-formatted, mounted at a convenient place, used by OSDs
> So no ceph here?
Ceph OSDs. This differs from the OP's situation in that the bcache
device isn't given to Ceph directly. But IIRC, Ceph does the same
(partition the device and put XFS file systems on these partitions).
And as already pointed out, we (currently) don't use the single SSD
for multiple backing stores for these systems (but do so on other
devices with high i/o load).
I responded because you seemed to be asking for any stack setup
involving bcache and Ceph.
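In case it helps anyone wanting to reproduce the stack above, the assembly was roughly along these lines (a minimal sketch from memory; device names, member counts and sizes are just examples, not our exact values):

  # MD-RAID6 over the SAS disks as the bcache backing device
  mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]
  # MD-RAID1 over the two SAS SSDs as the cache device
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdh /dev/sdi

  # register both with bcache, then attach the cache set to the backing device
  make-bcache -B /dev/md0
  make-bcache -C /dev/md1
  echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach

  # LVM on top of /dev/bcache0, one xfs-formatted LV per OSD
  pvcreate /dev/bcache0
  vgcreate vg0 /dev/bcache0
  lvcreate -L 1T -n osd0 vg0
  mkfs.xfs /dev/vg0/osd0
  mount /dev/vg0/osd0 /var/lib/ceph/osd/ceph-0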
> If you're using 4.4.y then you definitely need the patch from Ming Lei.
> [...]
> OTOH, 4.1 is rock solid. As of 4.1.21 or so it has all of the bcache
> stability fixes to date.
> [...]
> Same as above I think. When bcache writebacks in opt_io sized writes that
> exceed 256 bvecs then you run into issues. It only does that if
> raid_partial_stripes_expensive=1 like raid5/6 when it tries to prevent
> re-writes to the same stride.
>> kernel on our SAN nodes is 4.1.13-5-default (64 bit)
As mentioned, we're currently on 4.1.13, so nothing to worry about on my end.
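For completeness: if I understand your explanation correctly, whether that large writeback path gets exercised at all can be judged from the optimal I/O size the MD device exports, since that is what bcache sizes its writeback requests against when the RAID driver flags partial-stripe writes as expensive (rough check, md0 is just an example name):

  # full-stripe size exported by MD for the RAID6 set; bcache aligns
  # writeback to this when raid5/6 sets raid_partial_stripes_expensive
  cat /sys/block/md0/queue/optimal_io_size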
Thank you for clarifying!
Regards,
Jens