Re: Ceph RBD - High IOWait during the Writes

I am not sure any configuration tuning would help here.
The bottleneck is the HDDs. In my case, I have an SSD for
WAL/DB and it provides pretty good write performance.
The part I don't quite understand in your case is that
random read is quite fast. Due to HDD seek latency,
random reads are normally slow, so I'm not sure how they
are so fast in your case.
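
(If there is a spare SSD/NVMe per node, the usual way to put WAL/DB on it is
at OSD creation time -- roughly, with /dev/sdb and /dev/nvme0n1p1 as
placeholders for the HDD and an SSD partition/LV set aside for the DB:

    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1

ceph-volume co-locates the WAL with the DB unless --block.wal is given
separately. Existing HDD-only OSDs would need to be redeployed to pick this up.)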

Tony
> -----Original Message-----
> From: athreyavc <athreyavc@xxxxxxxxx>
> Sent: Tuesday, November 17, 2020 8:40 AM
> Cc: ceph-users <ceph-users@xxxxxxx>
> Subject:  Re: Ceph RBD - High IOWait during the Writes
> 
> I have disabled CephX authentication now. Though the performance is
> slightly better, it is not yet where it needs to be.
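> 
> (For reference, the usual way cephx gets disabled -- which may or may not be
> exactly how it was done here -- is setting the following in the [global]
> section of ceph.conf on every node and restarting the daemons:
> 
>     auth_cluster_required = none
>     auth_service_required = none
>     auth_client_required = none
> 
> Just noting it in case someone wants to reproduce the comparison.)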
> 
> Are there any other recommendations for all-HDD Ceph clusters?
> 
> From another thread:
> https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/DFHXXN4KKI5PS7LYPZJO4GYHU67JYVVL/
> 
> 
> *In our test based on v15.2.2, I found osd_numa_prefer_iface/osd_numa_auto_affinity
> make only half of the CPUs used. For 4K RW, it makes performance drop a lot.
> So you can check whether this occurs.*
> 
> I do see "set_numa_affinity unable to identify cluster interface" alerts.
> But I am not sure that is a cause for concern.
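> 
> (A quick way to check what the OSDs actually picked up -- assuming a
> Nautilus-or-later cluster where these options exist -- is something like:
> 
>     ceph config get osd osd_numa_auto_affinity
>     ceph osd metadata 0 | grep -i numa
> 
> The metadata output shows which NUMA node an OSD believes it is bound to.
> If the option is behind the half-CPU behaviour mentioned above, setting it
> to false with "ceph config set osd osd_numa_auto_affinity false" and
> restarting the OSDs is an easy A/B test.)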
> 
> Thanks and regards,
> 
> Athreya
> 
> On Thu, Nov 12, 2020 at 1:30 PM athreyavc <athreyavc@xxxxxxxxx> wrote:
> 
> > Hi,
> >
> > Thanks for the email, but we are not using RAID at all; we are using
> > LSI HBA 9400-8e HBAs. Each HDD is configured as an OSD.
> >
> > On Thu, Nov 12, 2020 at 12:19 PM Edward kalk <ekalk@xxxxxxxxxx> wrote:
> >
> >> For certain CPU architectures, disable the Spectre and Meltdown mitigations
> >> (be certain the network to the physical nodes is secure from internet
> >> access; use apt, http(s), or curl proxy servers). Also try toggling the
> >> physical on-disk cache on or off (RAID controller command).
> >> ^I had the same issue, and doing both of these fixed it. In my case the
> >> disks needed the on-disk cache hard set to 'on'; the RAID card default was
> >> not good. (Be sure to have diverse power and UPS protection if you need to
> >> run with the on-disk cache on; a good RAID battery, if using the RAID
> >> cache, improves perf.)
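> >>
> >> (Concretely -- generic examples, not specific to any particular HBA --
> >> the drive write cache can usually be checked and toggled with hdparm or
> >> sdparm:
> >>
> >>     hdparm -W /dev/sdX           # show write-cache state (placeholder device)
> >>     hdparm -W1 /dev/sdX          # enable it; -W0 disables
> >>     sdparm --get WCE /dev/sdX    # same idea for SAS drives
> >>
> >> and the mitigations can be switched off by adding mitigations=off to the
> >> kernel command line and rebooting, if you accept the security trade-off
> >> described above.)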
> >>
> >> To see the perf impact of the Spectre and Meltdown mitigations on vs. off,
> >> run: dd if=/dev/zero of=/dev/null
> >> ^I run it for 5 seconds and then Ctrl+C; it will show max north bridge ops.
> >>
> >> To see the difference in await and IOPS when toggling RAID card features
> >> and the on-disk cache, I run: iostat -xtc 2 and use fio to generate disk
> >> load for testing IOPS (google fio example commands; a sample follows below).
> >> ^south bridge + RAID controller to disks: ops and latency.
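> >>
> >> (A 4k random-write fio example along those lines -- /dev/sdX is a
> >> placeholder and fio will overwrite it, so point it at a scratch disk or
> >> a test file instead:
> >>
> >>     fio --name=randwrite --ioengine=libaio --direct=1 --rw=randwrite \
> >>         --bs=4k --iodepth=32 --numjobs=4 --runtime=60 --time_based \
> >>         --group_reporting --filename=/dev/sdX
> >>
> >> Running iostat -xtc 2 in another terminal while it runs shows await and
> >> %util change as the cache settings are toggled.)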
> >>
> >> -Edward Kalk
> >> Datacenter Virtualization
> >> Performance Engineering
> >> Socket Telecom
> >> Columbia, MO, USA
> >> ekalk@xxxxxxxxxx
> >>
> >> > On Nov 12, 2020, at 4:45 AM, athreyavc <athreyavc@xxxxxxxxx> wrote:
> >> >
> >> > Jumbo frames enabled and MTU is 9000
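> >>
> >> (Worth verifying that 9000 actually holds end to end, switches included --
> >> a simple check between two OSD hosts, with <other-host> as a placeholder:
> >>
> >>     ping -M do -s 8972 <other-host>
> >>
> >> 8972 is 9000 minus the IP/ICMP headers; if that fragments or fails,
> >> something in the path is still at MTU 1500.)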
> >>
> >>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



