Re: yet another deep-scrub performance topic

Thank you all for your input.
My best guess at the moment is that deep-scrub performs as designed, and the issue is simply that it has no limit on its own throughput, so it uses all the OSD time it can get. Even if it runs at a lower priority than client I/O, it can still fill the disk queue and effectively bottleneck the whole OSD.
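For anyone who wants to throttle deep-scrub rather than disable it, Luminous does expose a few knobs. A sketch using runtime injection follows; the 0.1 s sleep and the night-time window are illustrative values, not recommendations:

```shell
# Insert a sleep between scrub chunk operations on all OSDs
# (runtime only; persist in ceph.conf if it helps).
ceph tell osd.* injectargs '--osd_scrub_sleep 0.1'

# Restrict scrubbing to an off-peak window (example: 22:00-06:00).
ceph tell osd.* injectargs '--osd_scrub_begin_hour 22 --osd_scrub_end_hour 6'
```

Whether the sleep alone is enough to keep the spindle queue short is exactly the kind of thing worth measuring before and after.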

By the way, my installation is 12.2.2, upgraded from the 10.2 release. I use BlueStore OSDs with block.db/WAL on a separate SSD device (one per 4-6 spinners); the spinners are a mix of 7200 RPM SATA and 15k RPM SAS, and we're in the process of switching to all 15k.

> find ways to fix that (seperate block.db SSD's for instance might help)
I already have those, but for the sake of argument: how would it help, even in theory? If I'm not mistaken, block.db holds metadata only, while deep-scrub operates on the data and has to perform reads against the actual data OSDs. Deep-scrub has very little to do with metadata; that is what an ordinary scrub is there for. The best I can imagine is that the few metadata reads deep-scrub does would go to the separate device and lower the I/O load on the data drive, but that is a tiny droplet in the ocean of I/Os it still has to perform, so the impact would be negligible.

For now I have unset nodeep-scrub and noscrub on the cluster, and set the nodeep-scrub flag on my spinner-based pools. I will wait some time to see whether I get slow requests or any other performance issues without deep-scrub.
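For reference, the two-level setup described above can be done with the cluster-wide flags plus the per-pool flag; the pool name here is hypothetical:

```shell
# Re-enable scrubbing cluster-wide.
ceph osd unset nodeep-scrub
ceph osd unset noscrub

# Disable deep-scrub only on a specific spinner-backed pool
# ("rbd-spinners" is a made-up example name).
ceph osd pool set rbd-spinners nodeep-scrub true
```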
Meanwhile, I would be interested to see your performance metrics on spinner OSDs while deep-scrub is running. Does it consume all available OSD time or not?
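If anyone wants to compare, something like this (assuming sysstat's iostat, and that /dev/sdb is the spinner backing the OSD, which is just an example device) shows whether the disk is saturated during a deep-scrub:

```shell
# Extended per-device stats every 5 seconds; %util near 100 and a
# growing average queue size mean the scrub reads are saturating
# the spindle.
iostat -x 5 /dev/sdb
```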

On Tue, 11 Dec 2018 at 15:24, Janne Johansson <icepic.dz@xxxxxxxxx> wrote:
On Tue, 11 Dec 2018 at 12:54, Caspar Smit <casparsmit@xxxxxxxxxxx> wrote:
>
> On a Luminous 12.2.7 cluster these are the defaults:
> ceph daemon osd.x config show

Thank you very much.


--
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
