Re: Ceph cluster upgrade

Kees,

See http://dachary.org/?p=3189 for some simple instructions on testing your crush rule logic.
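The approach in that post can be sketched with crushtool against an exported map. A minimal example (the map filename, rule number, and replica count below are illustrative, not from your cluster):

```shell
# Export the compiled CRUSH map from the running cluster
ceph osd getcrushmap -o crushmap.bin

# Simulate placements for rule 1 with 3 replicas over 100 sample inputs,
# and print per-device distribution statistics
crushtool -i crushmap.bin --test --rule 1 --num-rep 3 \
    --min-x 0 --max-x 99 --show-statistics
```

If the simulated mappings land on the devices you expect, the rule logic is sound before you apply it to live pools.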

Bob

On Wed, Jul 6, 2016 at 7:07 AM, Kees Meijs <kees@xxxxxxxx> wrote:
Hi Micha,

Thank you very much for your prompt response. In an earlier process, I
already ran:
> $ ceph tell osd.* injectargs '--osd-max-backfills 1'
> $ ceph tell osd.* injectargs '--osd-recovery-op-priority 1'
> $ ceph tell osd.* injectargs '--osd-client-op-priority 63'
> $ ceph tell osd.* injectargs '--osd-recovery-max-active 1'
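For what it's worth, values injected this way can be checked per OSD via the admin socket (run on the host where that OSD lives; osd.0 here is just an example), and they do not survive a restart unless also placed in ceph.conf:

```shell
# Confirm the injected values took effect on a given OSD
ceph daemon osd.0 config get osd_max_backfills
ceph daemon osd.0 config get osd_recovery_max_active
```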

And yes, creating a separate ruleset makes sense. But, does the proposed
ruleset itself make sense as well?

Regards,
Kees

On 06-07-16 15:36, Micha Krause wrote:
> Set these in your ceph.conf beforehand:
>
> osd recovery op priority = 1
> osd max backfills        = 1
>
> I would also suggest creating a new crush rule instead of modifying
> your existing one.
>
> This enables you to change the rule on a per pool basis:
>
> ceph osd pool set <poolname> crush_ruleset <number>
>
> Then start with your smallest pool, and see how it goes.
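The per-pool migration Micha describes might look like this (pool name and rule number are illustrative only):

```shell
# List pools and their current rulesets to pick the smallest one first
ceph osd dump | grep ^pool

# Point one pool at the new rule
ceph osd pool set rbd crush_ruleset 1

# Watch backfill/recovery progress before moving on to the next pool
ceph -w
```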

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com