Re: CEPH 16.2.x: disappointing I/O performance

On Wed, 6 Oct 2021 at 10:11, Zakhar Kirpichenko <zakhar@xxxxxxxxx> wrote:
>
> I initially disabled power-saving features, which nicely improved the
> network latency.
>
> Btw, the first interesting find: I enabled 'rbd_balance_parent_reads' on
> the clients, and single-thread reads now scale much better; I routinely
> get readings like these from a single disk doing 4k reads with 1 thread:
>
> Run status group 0 (all jobs):
>    READ: bw=323MiB/s (339MB/s), 323MiB/s-323MiB/s (339MB/s-339MB/s),
> io=18.9GiB (20.3GB), run=60001-60001msec
> Disk stats (read/write):
>   vdc: ios=77451/0, merge=0/0, ticks=80269/0, in_queue=19964, util=97.94%
>
> No more 50 MB/s reads, yay! :-)
>
> The option's description, which I found in Red Hat's docs, says that
> "Ceph typically reads objects from the primary OSD. Since reads are
> immutable, you may enable this feature to balance parent reads between
> the primary OSD and the replicas."
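
For reference, a minimal sketch of how this option could be enabled
(hypothetical commands, not taken from the post above; the exact
mechanism depends on how the clients pick up their configuration):

  # assumed approach: push the option to all librbd clients via the
  # mon config store
  ceph config set client rbd_balance_parent_reads true

  # or set it in the [client] section of ceph.conf on each client host
  [client]
  rbd_balance_parent_reads = true

Running librbd clients (e.g. QEMU VMs) generally only pick up changed
client options when they reopen their images, so the change may not
take effect until the client reconnects.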

Do bear in mind that while this read balancing can look very good in
benchmarks while the cluster is fairly quiet, it may be no better, or
even worse, once you have 100 clients pushing I/O requests at the
cluster: by then the sum of all I/O is already spread over all the
OSDs, so there are few or no "idle" OSDs left to send the extra reads
to.

But it is nice if you can test this before and after real load hits
the cluster, for the science. =)
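
To make that before/after comparison repeatable, a fio job along these
lines should be roughly equivalent to the single-threaded 4k read test
quoted above (a sketch only: the device path, queue depth and other
parameters are assumptions, not details from the original test):

  # hypothetical fio job for the before/after comparison
  # /dev/vdc and iodepth=16 are assumptions, adjust to your setup
  [parent-read-test]
  ioengine=libaio
  direct=1
  filename=/dev/vdc
  rw=read
  bs=4k
  numjobs=1
  iodepth=16
  runtime=60
  time_based=1

Run it once while the cluster is quiet and once under production load
(e.g. "fio parent-read-test.fio") and compare the READ bandwidth lines.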


-- 
May the most significant bit of your life be positive.