On Mon, 16 May 2016, Yannis Aribaud wrote:

> Hi,
>
> > On 11 May 2016 at 03:10, "Eric Wheeler" <bcache@xxxxxxxxxxxxxxxxxx> wrote:
> > Can you describe your disk stack in more detail? What is below Ceph?
> >
> > I think this is the first time I've heard of Ceph being used in a bcache
> > stack on the list. Are there any others out there with success? If so,
> > what kernel versions and disk stack configurations?
>
> I'm using one SSD as the caching device (one cache set) and several SATA
> drives as backing devices.
> No RAID, no MD, no LVM, nothing fancy: only bare devices and bcache.
>
> The bcache devices are XFS-formatted and used by Ceph OSDs (journaling on
> the same device), one OSD per bcache device.

What bcache bucket size are you using? Review this thread and see if it
sounds similar:
	http://www.spinics.net/lists/linux-bcache/msg03796.html

I wonder if XFS is sending writes down that are too large, as speculated
in the thread above.

Please also try these two patches and see if they help:
	https://lkml.org/lkml/2016/4/5/1046
	http://www.spinics.net/lists/raid/msg51830.html

--
Eric Wheeler

> > So you are using one cache for multiple backing devices in a single
> > cache set? I remember seeing a thread on the list about someone having
> > a similar issue (multiple backends, but not Ceph). I put some time into
> > looking for the thread; it might be this one:
> >	bcache: Fix writeback_thread never writing back incomplete stripes.
> > but there was a patch for that which should be in 4.4.y back in March.
> >
> > Make sure you have this commit:
> >	https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/commit/?id=a556b804dfa654f054f3304c2c4d274ffe81f92
>
> My kernel already has this commit.
>
> > Also, do your backing device(s) set raid_partial_stripes_expensive=1 in
> > queue_limits (e.g. md raid5/6)? I've seen bugs around that flag that
> > might not be fixed yet.
>
> Nope. This is set to 0.

--
Eric Wheeler

> Regards,
> --
> Yannis Aribaud
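For anyone reading this thread later: a small, untested sketch of where the two values Eric asks about (the cache's bucket size and the backing device's partial_stripes_expensive flag) can be read. The sysfs paths are assumed from bcache's usual sysfs layout, not taken from this thread, and may vary by kernel version; on a machine without bcache loaded the loops simply find nothing.

```shell
#!/bin/sh
# Sketch, assuming a typical bcache sysfs layout (paths not from this
# thread; adjust to your kernel version and device names).

# Bucket size of each registered cache, via its per-cache sysfs node:
for f in /sys/fs/bcache/*/cache*/bucket_size; do
    [ -e "$f" ] && printf '%s: %s\n' "$f" "$(cat "$f")"
done

# partial_stripes_expensive as seen by each bcache backing device:
for f in /sys/block/bcache*/bcache/partial_stripes_expensive; do
    [ -e "$f" ] && printf '%s: %s\n' "$f" "$(cat "$f")"
done

# Offline alternative: bcache-super-show (from bcache-tools) on the
# cache SSD prints dev.bucket_size straight from the superblock.
status=done
```

The same values can also be confirmed offline with `bcache-super-show /dev/<cache-ssd>` if the device names above do not match your setup.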