Re: 2x replication: A BIG warning

On 12/07/16 14:58, Wido den Hollander wrote:
> On 7 December 2016 at 11:29, Kees Meijs <kees@xxxxxxxx> wrote:
>>
>>
>> Hi Wido,
>>
>> Valid point. At this moment, we're using a cache pool with size = 2 and
>> would like to "upgrade" to size = 3.
>>
>> Again, you're absolutely right... ;-)
>>
>> Anyway, anything to consider, or could we just:
>>
>>  1. Run "ceph osd pool set cache size 3".
>>  2. Wait for rebalancing to complete.
>>  3. Run "ceph osd pool set cache min_size 2".
>>
> Indeed! It is as simple as that.
>
> Your cache pool can also contain very valuable data you do not want to lose.
>
> Wido
Almost as simple as that...

First, make sure there is enough free space for the extra replicas. Then,
while the rebalance runs, watch for side effects: degraded performance,
blocked requests, etc.
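For example (a minimal sketch; adjust the watch interval to taste):
    # check available capacity before raising size
    ceph df
    ceph osd df
    # watch rebalance progress and look out for blocked requests
    watch -n 10 'ceph -s'
    ceph health detail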

And if there are issues, be ready to stop it with:
    ceph osd set nobackfill
    ceph osd set norecover
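You can confirm the flags took effect in the ceph -s output, or with:
    ceph osd dump | grep flags
Recovery and backfill then stay paused until you unset the flags.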

And then figure out some tuning, e.g. (very minimal settings):
    # more than likely you can handle more than 1 on a small cluster,
    # and maybe much more
    ceph tell osd.* injectargs --osd_max_backfills=1
    # manuals/emails/something I read suggest a number like 0.05; I find
    # that does nothing in times of real trouble, but 0.6 really slows
    # down recovery
    ceph tell osd.* injectargs --osd_recovery_sleep=0.6
    ceph osd set noscrub
    ceph osd set nodeep-scrub
    # I also think this one is highly relevant, but I'm not sure what to
    # suggest for it... others suggest 12 [1] to 16 [2], and so far I found
    # 8 works better than 12-32 on my small cluster with frequent
    # "blocked requests"
    # --osd_op_threads=...
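Note that injectargs only changes the running daemons; the values are lost
when an OSD restarts. To check what an OSD is actually running with (via its
admin socket on the OSD host), something like:
    # osd.0 is just an example ID; use one of your own
    ceph daemon osd.0 config get osd_max_backfills
and, if you want a setting to persist across restarts, put it in ceph.conf:
    [osd]
    osd max backfills = 1
    osd recovery sleep = 0.6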

And then unset those flags to resume recovery. When everything is done,
consider reverting your new settings (I would unset noscrub and
nodeep-scrub at least).
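That is:
    ceph osd unset nobackfill
    ceph osd unset norecover
    ceph osd unset noscrub
    ceph osd unset nodeep-scrub
    # and revert the runtime tunables, e.g. (0 should be the default for
    # osd_recovery_sleep on Jewel; check what yours was first)
    ceph tell osd.* injectargs --osd_recovery_sleep=0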

[1] https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/a-year-with-cinder-and-ceph-at-twc
[2] http://www.spinics.net/lists/ceph-users/msg32368.html (somewhere in
this thread, but I can't find the exact message online): "We have recently
increase osd op threads from 2 (default value) to 16 because CPU usage on
DN was very low. We have the impression it has increased overall ceph
cluster performances and reduced block ops occurrences."

-- 

--------------------------------------------
Peter Maloney
Brockmann Consult
Max-Planck-Str. 2
21502 Geesthacht
Germany
Tel: +49 4152 889 300
Fax: +49 4152 889 333
E-mail: peter.maloney@xxxxxxxxxxxxxxxxxxxx
Internet: http://www.brockmann-consult.de
--------------------------------------------



