Re: CEPH 16.2.x: disappointing I/O performance

I initially disabled power-saving features, which nicely improved the
network latency.
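
For anyone wanting to do the same, the usual knobs look something like this
(exact commands depend on the distro and CPU, so treat these as illustrative
rather than my verbatim steps):

    # switch to a low-latency tuned profile (performance governor, shallow C-states)
    tuned-adm profile latency-performance

    # or by hand: pin the frequency governor and disable deep idle states
    cpupower frequency-set -g performance
    cpupower idle-set -D 0    # disable idle states with exit latency above 0 us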

Btw, the first interesting find: I enabled 'rbd_balance_parent_reads' on the
clients, and single-thread reads now scale much better. I routinely get
readings like the following from a single disk doing 4k reads with 1 thread:

Run status group 0 (all jobs):
   READ: bw=323MiB/s (339MB/s), 323MiB/s-323MiB/s (339MB/s-339MB/s),
io=18.9GiB (20.3GB), run=60001-60001msec
Disk stats (read/write):
  vdc: ios=77451/0, merge=0/0, ticks=80269/0, in_queue=19964, util=97.94%

No more 50 MB/s reads, yay! :-)
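
For completeness, an fio invocation along these lines reproduces the test
above (the device path and flags here are illustrative, not my exact command
line):

    fio --name=rbd-4k-read --filename=/dev/vdc --rw=read --bs=4k \
        --numjobs=1 --time_based --runtime=60 --group_reporting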

The option is described in Red Hat's docs as follows: "Ceph typically
reads objects from the primary OSD. Since reads are immutable, you may
enable this feature to balance parent reads between the primary OSD and the
replicas."
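
In case it helps anyone else, it's a client-side librbd option, so it can go
into ceph.conf on the clients or into the centralized config (the syntax
below is how I'd expect it to look; adjust for your setup):

    [client]
        rbd balance parent reads = true

    # or via the monitors' centralized config:
    ceph config set client rbd_balance_parent_reads true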

/Z


On Wed, Oct 6, 2021 at 10:45 AM Anthony D'Atri <anthony.datri@xxxxxxxxx>
wrote:

>
>
> > I guess having excessive resources shouldn't hurt performance? :-)
>
> You’d think so — but I’ve seen a situation where it seemed to.
>
> Dedicated mon nodes with dual CPUs far in excess of what they needed.
> C-state flapping appeared to negatively impact the NIC driver and network
> (and mon) performance suffered.
>
>
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



