It's been fixed since v0.94.6, see http://ceph.com/releases/v0-94-6-hammer-released/
- fs: CephFS restriction on removing cache tiers is overly strict (issue#11504, pr#6402, John Spray)
but you have to make sure the release you are running actually includes the patch.
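To be safe, check what the daemons are actually running rather than just the installed packages; a minimal sketch (requires admin access to the cluster):

    # version of the locally installed ceph binaries
    ceph --version

    # version each OSD daemon is actually running, in case some
    # daemons were never restarted after the upgrade
    ceph tell osd.* version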
2017-01-10 16:52 GMT+08:00 Nick Fisk <nick@xxxxxxxxxx>:
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@lists.ceph.com] On Behalf Of Wido den Hollander
> Sent: 10 January 2017 07:54
> To: ceph new <ceph-users@xxxxxxxxxxxxxx>; Stuart Harland <s.harland@livelinktechnology.net>
> Subject: Re: Write back cache removal
>
>
> > On 9 January 2017 at 13:02, Stuart Harland <s.harland@livelinktechnology.net> wrote:
> >
> >
> > Hi,
> >
> > We’ve been operating a Ceph storage system that stores files using librados (a replicated pool on spinning-rust disks). We
> > implemented a cache tier on top of this with SSDs; however, we now want to turn it off.
> >
> > The documentation suggests setting the cache mode to forward before draining the pool; however, the ceph management
> > controller spits out an error saying that this is unsupported and hence dangerous.
> >
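For reference, on releases where this is still gated the error can be overridden explicitly. A minimal sketch of the documented drain sequence, using hypothetical pool names hot-pool (cache) and cold-pool (backing); the override flag is only needed on releases that demand it:

    # stop caching new writes; forward requests to the backing pool
    ceph osd tier cache-mode hot-pool forward --yes-i-really-mean-it

    # flush dirty objects and evict everything left in the cache pool
    rados -p hot-pool cache-flush-evict-all

    # once the cache pool is empty, detach the tier
    ceph osd tier remove-overlay cold-pool
    ceph osd tier remove cold-pool hot-pool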
>
> What version of Ceph are you running?
>
> And can you paste the exact command and the output?
>
> Wido
Hi Wido,
I think this has been discussed before, and it looks like it might be a current limitation. I'm not sure whether it's on anybody's radar to fix.
https://www.mail-archive.com/ceph-users@xxxxxxxxxxxxxx/msg24472.html
Nick
>
> > The thing is, I cannot really locate any documentation on why it’s considered unsupported or under what conditions it is expected
> > to fail. I have read a passing comment about data corruption with EC pools, but we are using replicated pools.
> >
> > Is this something that is safe to do?
> >
> > Otherwise, I have noted the read proxy mode of cache tiers, which is documented as a mechanism for transitioning from writeback
> > to disabled; however, the documentation on it is even sparser than for forward mode. Would this be a better approach if there is
> > some unsupported behaviour in the forward cache mode?
> >
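If forward mode stays blocked on your release, readproxy (where your release supports it) is just another value of the same setting. A minimal sketch, again with a hypothetical hot-pool:

    # proxy reads through the cache tier without promoting objects,
    # then flush and evict as above before detaching the tier
    ceph osd tier cache-mode hot-pool readproxy
    rados -p hot-pool cache-flush-evict-all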
> > Any thoughts would be appreciated - we really cannot afford to corrupt the data, and I really do not want to have to do a
> > manual, software-based eviction of this data.
> >
> > regards
> >
> > Stuart
> >
> >
> > − Stuart Harland:
> > Infrastructure Engineer
> > Email: s.harland@livelinktechnology.net <mailto:s.harland@livelinktechnology.net>
> >
> >
> >
> > LiveLink Technology Ltd
> > McCormack House
> > 56A East Street
> > Havant
> > PO9 1BS