Re: Nautilus slow using "ceph tell osd.* bench"

Hi Jim, when you reweight an OSD, rebalancing is triggered. How did you set the weight back — immediately, or after waiting for the rebalance to complete? I tried both on my cluster and didn't see the osd bench results change significantly like yours (actually, no change at all). However, my cluster is on 12.2.12, so maybe that is the reason.



Moreover, I really can't figure out why flipping the reweight makes such a difference. I hope the experts can explain that.





------------------ Original ------------------
From: &nbsp;"Jim Forde";<jimf@xxxxxxxxx&gt;;
Date: &nbsp;Aug 7, 2020
To: &nbsp;"ceph-users"<ceph-users@xxxxxxx&gt;; 

Subject: &nbsp; Re: Nautilus slow using "ceph tell osd.* bench"



SOLUTION FOUND!
Reweight the osd to 0, then set it back to where it belongs.
ceph osd crush reweight osd.0 0.0
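The full round trip, for the record (the 1.81940 below is only an example restore weight — use whatever weight the OSD had before, which "ceph osd tree" will show):

ceph osd tree | grep osd.0            # note the current CRUSH weight first
ceph osd crush reweight osd.0 0.0
ceph osd crush reweight osd.0 1.81940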

Original
ceph tell osd.0 bench -f plain
bench: wrote 1 GiB in blocks of 4 MiB in 4.03434 sec at 254 MiB/sec 63 IOPS

After reweight of osd.0
ceph tell osd.0 bench -f plain
bench: wrote 1 GiB in blocks of 4 MiB in 1.54555 sec at 663 MiB/sec 165 IOPS
ceph tell osd.1 bench -f plain
bench: wrote 1 GiB in blocks of 4 MiB in 3.54652 sec at 289 MiB/sec 72 IOPS

After reweight of osd.1
ceph tell osd.0 bench -f plain
bench: wrote 1 GiB in blocks of 4 MiB in 0.948457 sec at 1.1 GiB/sec 269 IOPS
ceph tell osd.1 bench -f plain
bench: wrote 1 GiB in blocks of 4 MiB in 0.949384 sec at 1.1 GiB/sec 269 IOPS
ceph tell osd.2 bench -f plain
bench: wrote 1 GiB in blocks of 4 MiB in 3.56726 sec at 287 MiB/sec 71 IOPS

I have finished the reweight procedure on osd node 1 and all 6 OSDs are back where they belong, but I have 4 more nodes to go. It looks like this should fix it. If anyone has an alternative method for getting around this, I am all ears.
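For anyone who wants to script the remaining nodes, here is a rough sketch (assumptions: jq is installed, osd.6 through osd.11 live on the next node, and "ceph osd tree -f json" reports the CRUSH weight in a "crush_weight" field — verify the weights against your own tree before running anything like this):

for id in 6 7 8 9 10 11; do
    # save the current CRUSH weight before zeroing it
    w=$(ceph osd tree -f json | jq -r ".nodes[] | select(.name==\"osd.$id\") | .crush_weight")
    echo "osd.$id: will restore weight $w"
    ceph osd crush reweight "osd.$id" 0.0
    ceph osd crush reweight "osd.$id" "$w"
done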

Dave, would be interested to hear if this works for you.

-Jim
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


