Re: x pgs not deep-scrubbed in time

Hi,

You are limited by your drives, so not much can be done, but it should
at least catch up a bit and reduce the number of PGs that have not
been deep scrubbed in time.
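
If you want to nudge the backlog along while it catches up, one common approach (not something discussed above, just a rough sketch) is to queue deep scrubs by hand on the PGs that are behind:

  # list the PGs flagged by the health warning
  ceph health detail | grep 'not deep-scrubbed since'
  # manually queue a deep scrub on one of them (2.1f is only a placeholder PG id)
  ceph pg deep-scrub 2.1f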


On Wed, Apr 3, 2019 at 8:13 PM Michael Sudnick
<michael.sudnick@xxxxxxxxx> wrote:
>
> Hi Alex,
>
> I'm okay myself with the number of scrubs performed; would you expect tweaking any of those values to let the deep scrubs finish in time?
>
> Thanks,
>   Michael
>
> On Wed, 3 Apr 2019 at 10:30, Alexandru Cucu <me@xxxxxxxxxxx> wrote:
>>
>> Hello,
>>
>> You can increase *osd scrub max interval* and *osd deep scrub
>> interval* if you don't need at least one scrub/deep scrub per week.
>>
>> I would also play with *osd max scrubs* and *osd scrub load threshold*
>> to get more scrubbing work done, but be careful, as more aggressive
>> scrubbing can have a big impact on client I/O performance.
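>>
>> A minimal sketch of what that tuning could look like on Nautilus with
>> "ceph config set" (the values below are illustrative assumptions, not
>> recommendations; the same options can also go in ceph.conf):
>>
>>   # relax the scrub intervals to roughly two weeks instead of one
>>   ceph config set osd osd_scrub_max_interval 1209600
>>   ceph config set osd osd_deep_scrub_interval 1209600
>>   # allow more scrub work per OSD, at the cost of client I/O
>>   ceph config set osd osd_max_scrubs 2
>>   ceph config set osd osd_scrub_load_threshold 3.0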
>>
>> ---
>> Alex Cucu
>>
>> On Wed, Apr 3, 2019 at 3:46 PM Michael Sudnick
>> <michael.sudnick@xxxxxxxxx> wrote:
>> >
>> > Hello, I was on IRC yesterday about this and got some input, but haven't figured out a solution yet. I have a 5-node, 41-OSD cluster which currently shows the warning "295 pgs not deep-scrubbed in time". The number slowly increases as deep scrubs happen. The cluster primarily uses 5400 RPM 2.5" disks, and those are my general bottleneck. Processors are 8-core/16-thread Intel Xeon D-1541s. There are 8 OSDs per node (one node has 9), and each node hosts a MON, MGR and MDS.
>> >
>> > My CPU usage is low; it's a very low-traffic cluster, just a home lab, and CPU usage rarely spikes, and even then only to around 30%. RAM is fine: each node has 64 GiB, and only about 33 GiB is used. Network is overkill: 2x1GbE public and 2x10GbE cluster. Disk %util can hit 80% while deep scrubs are running, so that seems to be my bottleneck.
>> >
>> > I am running Nautilus 14.2.0 and had been running fine since release until about 3 days ago, when a disk died and I replaced it.
>> >
>> > Any suggestions on what I can do? Thanks in advance.
>> >
>> > -Michael
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



