Hi *,
Quoting Eric Wheeler <bcache@xxxxxxxxxxxxxxxxxx>:
> On Mon, 2 May 2016, Yannis Aribaud wrote:
> [...]
>
> I think this is the first time I've heard of Ceph being used in a bcache
> stack on the list. Are there any others out there with success? If so,
> what kernel versions and disk stack configuration?
After an extensive test period, we have just put a production Ceph
environment into service on our bcache-based SAN servers (a rough
sketch of how the stack is assembled follows the list below):
- MD-RAID6 (several SAS disks) as bcache backing device
- MD-RAID1 (two SAS SSDs) as bcache cache device, only for that single
backing device
- LVM on top of /dev/bcache0
- XFS-formatted LVs, mounted at a convenient place and used by the OSDs
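
For the list archives, here is a minimal sketch of how such a stack can
be assembled. All device names, the VG/LV names and the mount point are
made up for illustration; make-bcache and bcache-super-show come from
bcache-tools:

  # Backing device: MD-RAID6 over several SAS disks
  mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]
  # Cache device: MD-RAID1 over two SAS SSDs
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdh /dev/sdi

  # Format both for bcache and attach the backing device to the cache set
  make-bcache -B /dev/md0
  make-bcache -C /dev/md1
  cset=$(bcache-super-show /dev/md1 | awk '/cset.uuid/ {print $2}')
  echo "$cset" > /sys/block/bcache0/bcache/attach

  # LVM on top of /dev/bcache0, one XFS-formatted LV per OSD
  pvcreate /dev/bcache0
  vgcreate vg_san /dev/bcache0
  lvcreate -L 1T -n osd0 vg_san
  mkfs.xfs /dev/vg_san/osd0
  mount /dev/vg_san/osd0 /var/lib/ceph/osd/ceph-0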
The kernel on our SAN nodes is 4.1.13-5-default (64 bit), as distributed
by openSUSE Leap 42.1 (SUSE makes sure that vital bcache patches, among
others, are included).
We're planning to switch later to a setup similar to the one the OP is
running, using separate disks with a common bcache cache device for the
OSDs (see the sketch below).
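
A rough sketch of what we have in mind (again with made-up device
names); bcache allows several backing devices to attach to one common
cache set:

  make-bcache -C /dev/md1               # the shared cache device
  make-bcache -B /dev/sdb /dev/sdc      # one backing device per OSD disk
  cset=$(bcache-super-show /dev/md1 | awk '/cset.uuid/ {print $2}')
  for dev in bcache0 bcache1; do
      echo "$cset" > /sys/block/$dev/bcache/attach
  done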
While we have not yet stressed the Ceph part on the production system
(plenty of other data is served via SCST/FC, NFS, Samba and others), we
have not run into any problems so far, and in particular no kernel
crashes.
> [...]
>
> Also, does your backing device(s) set raid_partial_stripes_expensive=1 in
> queue_limits (eg, md raid5/6)? I've seen bugs around that flag that might
> not be fixed yet.
This does sound disturbing to me - could you please give more details,
perhaps in a new thread?
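
For reference: the flag lives in the kernel's struct queue_limits and,
as your example says, is set by md raid5/6. I'm not aware of a sysfs
file exporting it directly, but one can at least check whether a
backing device is an md raid5/6 and therefore affected (assuming it is
/dev/md0):

  mdadm --detail /dev/md0 | grep 'Raid Level'
  cat /sys/block/md0/md/level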
Regards,
Jens