Re: ceph recovery killing vms

You can change some OSD tunables to lower the priority of backfills:

        osd recovery max chunk:   8388608
        osd recovery op priority: 2

In general, a lower op priority means it will take longer for your
placement groups to go from degraded to active+clean; the idea is to
balance recovery time against not starving client requests. I've found
2 to work well on our clusters, YMMV.
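
If you'd rather not restart OSDs to test this, the same options can
usually be injected at runtime and then persisted in ceph.conf. A rough
sketch (just the two options above; assumes you run it from a node with
the client.admin keyring):

        # apply to all running OSDs
        ceph tell osd.* injectargs '--osd-recovery-op-priority 2 --osd-recovery-max-chunk 8388608'

        # persist the change in ceph.conf on the OSD hosts
        [osd]
                osd recovery max chunk = 8388608
                osd recovery op priority = 2

Injected values only last until the OSD restarts, which is why the
ceph.conf entries are there as well.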

On Mon, Oct 28, 2013 at 10:16 AM, Kevin Weiler
<Kevin.Weiler@xxxxxxxxxxxxxxx> wrote:
> Hi all,
>
> We have a ceph cluster that being used as a backing store for several VMs
> (windows and linux). We notice that when we reboot a node, the cluster
> enters a degraded state (which is expected), but when it begins to recover,
> it starts backfilling, and that kills the performance of our VMs. The VMs run
> slowly, or not at all, and also seem to switch their ceph mounts to read-only.
> I was wondering 2 things:
>
> Shouldn't we be recovering instead of backfilling? It seems like backfilling
> is a much more intensive operation.
> Can we improve the recovery/backfill performance so that our VMs don't go
> down when there is a problem with the cluster?
>
>
> --
>
> Kevin Weiler
>
> IT
>
>
>
> IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL 60606
> | http://imc-chicago.com/
>
> Phone: +1 312-204-7439 | Fax: +1 312-244-3301 | E-Mail:
> kevin.weiler@xxxxxxxxxxxxxxx
>
>



-- 

Kyle
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



