Not a single scrub in my case.
Steve Taylor | Senior Software Engineer | StorageCraft Technology Corporation
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2799
If you are not the intended recipient of this message or received it erroneously, please notify the sender and delete it, together with any attachments, and be advised that any dissemination or copying of this message is prohibited.
-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Ruben Kerkhof
Sent: Wednesday, December 7, 2016 3:34 PM
To: Francois Lafont <francois.lafont.1978@xxxxxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: 10.2.4 Jewel released
On Wed, Dec 7, 2016 at 11:20 PM, Francois Lafont <francois.lafont.1978@xxxxxxxxx> wrote:
> On 12/07/2016 11:16 PM, Steve Taylor wrote:
>> I'm seeing the same behavior with very similar perf top output. One server with 32 OSDs has a load average approaching 800. No excessive memory usage and no iowait at all.
>
> Exactly!
>
> And another possibly interesting detail: I have ceph-osd processes with high CPU load (as Steve said, no iowait and no excessive memory usage). If I restart the ceph-osd daemon, the CPU load stays normal for exactly 15 minutes, and then the high load returns. Curious that it's exactly 15 minutes, isn't it?
Thanks, I'll check how long it takes for this to happen on my cluster.
I did just pause scrub and deep-scrub. Are there scrubs running on your cluster now by any chance?
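For reference, this is roughly what pausing and checking scrubs looks like on a Jewel cluster (a minimal sketch using the standard ceph CLI flags; adjust to your setup):

    # stop new scrubs / deep-scrubs from being scheduled
    ceph osd set noscrub
    ceph osd set nodeep-scrub

    # check whether any PGs are still scrubbing or deep-scrubbing
    ceph -s
    ceph pg dump | grep -i scrub

    # re-enable scrubbing once you're done testing
    ceph osd unset noscrub
    ceph osd unset nodeep-scrub

Note that PGs already scrubbing when the flags are set will finish their current scrub, so it can take a little while before the cluster is fully scrub-free.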
Kind regards,
Ruben
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com