http://tracker.ceph.com/issues/22754
This is a bug in Luminous for CephFS volumes; it's not anything you're doing wrong. The mon check for removing a cache tier only sees that the base pool is EC and in use by CephFS and refuses, without considering whether ec_overwrites is enabled. The tracker above has a PR marked for backporting into Luminous to allow the removal when ec_overwrites is enabled. At this point it looks like it will be included in 12.2.4; I don't think it was ready in time for 12.2.3.
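Once that fix is in the release you're running, the removal itself should just be the last steps from the cache-tiering doc you already followed, using the pool names from your transcript below:

ceph osd tier remove-overlay ecdata
ceph osd tier remove ecdata cache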
If everything is working fine for you with the cache in forward mode, you can leave it there. Alternatively, you can put the cache back in writeback mode and start testing the promotion settings so that objects are only promoted after they've been requested a few times. Historically with CephFS, EC, and cache tiers you could only set hit_set_count to 1 because every object read needed to be promoted. I haven't tested this myself yet, but you should be able to set hit_set_count to 100 or so, and then your cache would only ever hold things freshly written to CephFS and wouldn't promote things that are merely being read. Having every read trigger a write into the cache is bad for performance. A rough sketch of where to start is below.
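Something along these lines, assuming the pool name 'cache' from your transcript and treating the numbers as placeholders to tune rather than recommendations:

# placeholder values; tune for your workload
ceph osd tier cache-mode cache writeback
ceph osd pool set cache hit_set_count 100
ceph osd pool set cache hit_set_period 3600
ceph osd pool set cache min_read_recency_for_promote 100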
Or you can be lazy like me and just revert the cache to what it's been doing for years and wait for the bug fix to be released.
On Wed, Feb 14, 2018 at 6:36 AM Kenneth Waegeman <kenneth.waegeman@xxxxxxxx> wrote:
Hi all,
I'm trying to remove the cache tier from an erasure-coded pool where all
OSDs are BlueStore and allow_ec_overwrites is true. I followed the steps
on http://docs.ceph.com/docs/master/rados/operations/cache-tiering/, but
with the remove-overlay step I'm getting an EBUSY error:
[root@ceph001 ~]# ceph osd tier cache-mode cache forward --yes-i-really-mean-it
set cache-mode for pool 'cache' to forward
[root@ceph001 ~]# rados -p cache cache-flush-evict-all
[root@ceph001 ~]# rados -p cache ls
[root@ceph001 ~]# ceph osd tier remove-overlay ecdata
Error EBUSY: pool 'ecdata' is in use by CephFS via its tier
[root@ceph001 ~]# ceph osd pool set ecdata allow_ec_overwrites true
set pool 7 allow_ec_overwrites to true
[root@ceph001 ~]# ceph osd tier remove-overlay ecdata
Error EBUSY: pool 'ecdata' is in use by CephFS via its tier
I tried this with an fs that has a replicated pool as backend, and that worked.
Is there something else I should set to make this possible?
I'm on Luminous 12.2.2
Thanks!
Kenneth
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com