Problem setting tunables for ceph firefly

There was a good discussion of this a month ago:
https://www.mail-archive.com/ceph-users%40lists.ceph.com/msg11483.html

That'll give you some things you can try, and information on how to undo it
if it does cause problems.
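
For reference, switching tunable profiles (and reverting) is a single
command. A minimal sketch, assuming a firefly cluster and that you can
tolerate the data movement either direction will trigger:

  # switch to the current optimal tunables (starts a large rebalance)
  ceph osd crush tunables optimal

  # go back to the legacy tunables if clients lose access
  ceph osd crush tunables legacy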


You can disable the warning by adding this to the [mon] section of
ceph.conf:
  mon warn on legacy crush tunables = false
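
If you'd rather not restart the monitors, the same setting can be injected
into the running mons. A sketch, assuming you run it from a node with the
admin keyring (injected values are lost on mon restart, so keep the
ceph.conf entry as well):

  ceph tell mon.* injectargs '--mon-warn-on-legacy-crush-tunables=false'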





On Thu, Aug 21, 2014 at 7:17 AM, Gerd Jakobovitsch <gerd@mandic.net.br>
wrote:

> Dear all,
>
> I have a ceph cluster running on 3 nodes, with 240 TB of space at 60%
> usage, used by rbd and radosgw clients. Recently I upgraded from emperor
> to firefly, and I got the message about legacy tunables described in
> http://ceph.com/docs/master/rados/operations/crush-map/#tunables. After
> some data rearrangement to minimize risks, I tried to apply the optimal
> settings. This resulted in 28% object degradation, much more than I
> expected, and worse, I lost communication with the rbd clients running on
> kernels 3.10 or 3.11.
>
> Searching for a solution, I came across this proposal:
> https://www.mail-archive.com/ceph-users%40lists.ceph.com/msg11199.html.
> Applying it (before all the data had moved), I got an additional 2% of
> object degradation, but the rbd clients came back to life. However, I
> then got a large number of degraded or stale PGs that are not
> backfilling. Looking for the definition of chooseleaf_vary_r, I found
> this in http://ceph.com/docs/master/rados/operations/crush-map/:
> "chooseleaf_vary_r: Whether a recursive chooseleaf attempt will start with
> a non-zero value of r, based on how many attempts the parent has already
> made. Legacy default is 0, but with this value CRUSH is sometimes unable to
> find a mapping. The optimal value (in terms of computational cost and
> correctness) is 1. However, for legacy clusters that have lots of existing
> data, changing from 0 to 1 will cause a lot of data to move; a value of 4
> or 5 will allow CRUSH to find a valid mapping but will make less data move."
>
> Is there any suggestion for handling this? Do I have to set
> chooseleaf_vary_r to some other value? Will I lose communication with my
> rbd clients? Or should I return to the legacy tunables?
>
> Regards,
>
> Gerd Jakobovitsch
>
>
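
If you do end up trying an intermediate chooseleaf_vary_r (the 4 or 5 the
quoted docs mention), one way is to edit the CRUSH map by hand. A rough
sketch, with 5 purely as an illustration; note that any nonzero
chooseleaf_vary_r needs fairly recent client support, so check that your
3.10/3.11 kernel rbd clients still connect before letting the rebalance
run to completion:

  # dump and decompile the current CRUSH map
  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt

  # in crushmap.txt, add or edit the tunable line near the top:
  #   tunable chooseleaf_vary_r 5

  # recompile and inject it back
  crushtool -c crushmap.txt -o crushmap.new
  ceph osd setcrushmap -i crushmap.new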