Re: Ceph RBD - High IOWait during the Writes

Hi,

Thanks for the email, but we are not using RAID at all; we are using LSI
9400-8e HBAs. Each HDD is configured as an OSD.
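
Since the disks sit behind plain HBAs rather than a RAID controller, the
on-disk write cache can be inspected and toggled directly from the host.
A rough example (device names are placeholders; verify against your own
OSD disks before changing anything in production):

# SATA drives: query and set the volatile write cache
hdparm -W /dev/sdb       # show current write-cache state
hdparm -W1 /dev/sdb      # enable write cache (-W0 disables it)

# SAS drives: use sdparm instead
sdparm --get=WCE /dev/sdb
sdparm --set=WCE /dev/sdb    # enable (use --clear=WCE to disable)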

On Thu, Nov 12, 2020 at 12:19 PM Edward kalk <ekalk@xxxxxxxxxx> wrote:

> For certain CPU architectures, disable the Spectre and Meltdown
> mitigations. (Be certain the network to the physical nodes is secured
> from internet access; use apt, http(s), and curl proxy servers.)
> Try toggling the physical on-disk cache on or off (via a RAID controller
> command).
> ^I had the same issue, and doing both of these fixed it. In my case the
> disks needed the on-disk cache hard-set to 'on'; the RAID card default
> was not good. (Be sure to have diverse power and UPS protection if you
> need to run with the on-disk cache on. A good RAID battery, if you use
> the RAID cache, improves performance.)
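>
> For example, to check which mitigations are active and to disable them
> all at boot (only safe on hosts with no exposure to untrusted code or
> networks):
>
> grep . /sys/devices/system/cpu/vulnerabilities/*
> # then add mitigations=off to GRUB_CMDLINE_LINUX in /etc/default/grub,
> # run update-grub, and reboot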
>
> To see the perf impact of the Spectre/Meltdown mitigations on vs. off,
> run: dd if=/dev/zero of=/dev/null
> ^I run it for 5 seconds and then Ctrl+C.
> It will show the maximum north-bridge ops.
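>
> For example (timeout sends SIGINT after 5 seconds; GNU dd prints its
> transfer statistics when interrupted):
>
> timeout -s INT 5 dd if=/dev/zero of=/dev/null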
>
> To see the difference in await and IOPS when toggling RAID card features
> and the on-disk cache, I run: iostat -xtc 2
> and use fio to generate disk load for testing IOPS. (Google fio example
> commands, or see the sample run below.)
> ^This shows south-bridge + RAID-controller-to-disk ops and latency.
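>
> A sample fio run against a spare disk (WARNING: random writes to a raw
> device are destructive; /dev/sdX is a placeholder, never point this at
> an in-use OSD):
>
> fio --name=randwrite --filename=/dev/sdX --direct=1 --ioengine=libaio \
>     --rw=randwrite --bs=4k --iodepth=32 --runtime=30 --time_based \
>     --group_reporting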
>
> -Edward Kalk
> Datacenter Virtualization
> Performance Engineering
> Socket Telecom
> Columbia, MO, USA
> ekalk@xxxxxxxxxx
>
> > On Nov 12, 2020, at 4:45 AM, athreyavc <athreyavc@xxxxxxxxx> wrote:
> >
> > Jumbo frames are enabled and the MTU is 9000.
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



