Re: ceph recovery killing vms


 



Thanks Kurt,

I wasn't aware of the second page; it has been very helpful. However, the osd recovery max chunk option doesn't list a unit:

osd recovery max chunk

Description: The maximum size of a recovered chunk of data to push.
Type: 64-bit Integer Unsigned
Default: 1 << 20


I assume this is in bytes.
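For reference, the default of 1 << 20 works out to exactly one mebibyte, which can be checked with shell arithmetic:

```shell
# The default, 1 << 20, expressed in bytes:
bytes=$((1 << 20))
echo "osd recovery max chunk default: ${bytes} bytes ($((bytes / 1024 / 1024)) MiB)"
# -> osd recovery max chunk default: 1048576 bytes (1 MiB)
```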



-- 

Kevin Weiler

IT

 

IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL 60606 | http://imc-chicago.com/

Phone: +1 312-204-7439 | Fax: +1 312-244-3301 | E-Mail: kevin.weiler@xxxxxxxxxxxxxxx


From: Kurt Bauer <kurt.bauer@xxxxxxxxxxxx>
Date: Tuesday, November 5, 2013 2:52 PM
To: Kevin Weiler <kevin.weiler@xxxxxxxxxxxxxxx>
Cc: "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: ceph recovery killing vms



Kevin Weiler wrote:
Thanks Kyle,

What's the unit for osd recovery max chunk?
Have a look at http://ceph.com/docs/master/rados/configuration/osd-config-ref/, where all of the OSD config options are described; in particular, see the backfilling and recovery sections.
Also, how do I find out what my current values are for these osd options?
Have a look at http://ceph.com/docs/master/rados/configuration/ceph-conf/#viewing-a-configuration-at-runtime
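As a quick sketch of what that page describes, assuming shell access to a host running an OSD (the admin socket path and OSD id will vary with your deployment):

```shell
# Dump the full live configuration of osd.0 via its admin socket,
# filtering for the recovery-related options:
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep osd_recovery

# Or query a single option directly:
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config get osd_recovery_max_chunk
```

These commands need to run on the host where the OSD daemon lives, since they talk to its local UNIX admin socket.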

Best regards,
Kurt





On 10/28/13 6:22 PM, "Kyle Bader" <kyle.bader@xxxxxxxxx> wrote:

You can change some OSD tunables to lower the priority of backfills:

       osd recovery max chunk:   8388608
       osd recovery op priority: 2

In general, a lower op priority means it will take longer for your
placement groups to go from degraded to active+clean; the idea is to
balance recovery time against starving client requests. I've found 2
to work well on our clusters, YMMV.
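As a sketch, those tunables could be set persistently in ceph.conf, or injected into running OSDs without a restart (values here are the ones from this thread, not recommendations):

```shell
# Persist in ceph.conf under the [osd] section:
#
#   [osd]
#   osd recovery max chunk   = 8388608
#   osd recovery op priority = 2
#
# Or apply to all running OSDs at runtime:
ceph tell osd.* injectargs '--osd-recovery-max-chunk 8388608 --osd-recovery-op-priority 2'
```

Note that injectargs changes are lost on daemon restart, so anything you settle on should also go into ceph.conf.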

On Mon, Oct 28, 2013 at 10:16 AM, Kevin Weiler
<Kevin.Weiler@xxxxxxxxxxxxxxx> wrote:
Hi all,

We have a Ceph cluster that is being used as a backing store for several
VMs (Windows and Linux). We notice that when we reboot a node, the cluster
enters a degraded state (which is expected), but when it begins to recover,
it starts backfilling and kills the performance of our VMs. The VMs run
slowly, or not at all, and also seem to switch their Ceph mounts to
read-only. I was wondering two things:

1. Shouldn't we be recovering instead of backfilling? Backfilling seems
   to be a much more intensive operation.
2. Can we improve the recovery/backfill performance so that our VMs don't
   go down when there is a problem with the cluster?



________________________________

The information in this e-mail is intended only for the person or entity to which it is addressed.

It may contain confidential and/or privileged material. If someone other than the intended recipient should receive this e-mail, he/she shall not be entitled to read, disseminate, disclose or duplicate it.

If you receive this e-mail unintentionally, please inform us immediately by "reply" and then delete it from your system. Although this information has been compiled with great care, neither IMC Financial Markets & Asset Management nor any of its related entities shall accept any responsibility for any errors, omissions or other inaccuracies in this information or for the consequences thereof, nor shall it be bound in any way by the contents of this e-mail or its attachments. In the event of incomplete or incorrect transmission, please return the e-mail to the sender and permanently delete this message and any attachments.

Messages and attachments are scanned for all known viruses. Always scan attachments before opening them.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
--

Kyle


--
Kurt Bauer <kurt.bauer@xxxxxxxxxxxx>
Vienna University Computer Center - ACOnet - VIX
Universitaetsstrasse 7, A-1010 Vienna, Austria, Europe
Tel: ++43 1 4277 - 14070 (Fax: - 9140)  KB1970-RIPE



