Re: How to add 100 new OSDs...

On Fri, Aug 2, 2019 at 6:57 PM Robert LeBlanc <robert@xxxxxxxxxxxxx> wrote:
>
> On Fri, Jul 26, 2019 at 1:02 PM Peter Sabaini <peter@xxxxxxxxxx> wrote:
>>
>> On 26.07.19 15:03, Stefan Kooman wrote:
>> > Quoting Peter Sabaini (peter@xxxxxxxxxx):
>> >> What kind of commit/apply latency increases have you seen when adding a
>> >> large number of OSDs? I'm nervous about how sensitive workloads might
>> >> react here, esp. with spinners.
>> >
>> > You mean when there is backfilling going on? Instead of doing "a big
>>
>> Yes exactly. I usually tune down the max rebalance and max recovery active
>> knobs to lessen the impact, but I have still found that the additional write
>> load can substantially increase I/O latencies. Not all workloads tolerate this.
>
>
> We have been using:
>
> osd op queue = wpq
> osd op queue cut off = high
>
> It virtually eliminates the impact of backfills on our clusters. Our backfill and recovery times have increased when the cluster has lots of client I/O, but the clients haven't noticed that huge backfills have been going on.
>
> ----------------
> Robert LeBlanc
> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
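For anyone following along, here is a minimal sketch of where those settings
would typically live, assuming they are applied cluster-wide via the [osd]
section of ceph.conf (the op queue options generally only take effect after an
OSD restart, and the backfill/recovery throttle values are only illustrative):

[osd]
# Weighted priority op queue; with the cut off at high, more ops (including
# backfill/recovery work) go through the weighted queue, so they are far less
# able to crowd out client I/O.
osd op queue = wpq
osd op queue cut off = high
# Optional throttles, as Peter mentioned, to limit recovery/backfill concurrency.
osd max backfills = 1
osd recovery max active = 1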

Would this be superior to setting the following?

osd_recovery_sleep = 0.5 (or some high value)
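(For comparison, a rough sketch of how that recovery sleep could be applied at
runtime, since osd_recovery_sleep is runtime-changeable; the 0.5 value is just
the figure above and osd.0 is only a placeholder:)

# Inject a recovery sleep on all OSDs without restarting them
ceph tell osd.* injectargs '--osd_recovery_sleep 0.5'
# Check the active value on a given OSD (run on that OSD's host)
ceph daemon osd.0 config get osd_recovery_sleep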


--
Alex Gorbachev
Intelligent Systems Services Inc.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


