Re: Pacific: parallel PG reads?


 



Hi,

On Thu, Nov 11, 2021 at 3:26 PM Janne Johansson <icepic.dz@xxxxxxxxx> wrote:

> On Thu, 11 Nov 2021 at 13:54, Zakhar Kirpichenko <zakhar@xxxxxxxxx> wrote:
> > I'm still trying to combat really bad read performance from HDD-backed
> > replicated pools, which is under 100 MB/s most of the time with 1 thread
> > and QD=1. I don't quite understand why the reads are that slow, i.e. much
>
> (doing a single-thread-single-client test on a cluster ..)
>

Doing a single-thread single-client test on a cluster is totally fine if
single-thread single-client performance is important.


> Also, if you map an RBD, like when you have the rbd image as a 40G
> qemu drive for a VM guest, it will get split into 4M pieces anyhow, so
> if the guest decides to read its drive from 0 -> end it will fire off
> 10000 read requests, spread out over the cluster and all OSDs on which
> the pool is placed, so you get load sharing.
>

I guess reading the guest disk from 0 to end sequentially doesn't do any
load sharing: at QD=1 each 4M object is served by a single OSD, one after
another. Reading multiple parts of the guest disk simultaneously has a good
chance to.
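
To make that concrete, here is a minimal sketch (mine, not tested against this
cluster) of what "reading multiple parts simultaneously" could look like against
a kernel-mapped RBD image. The device path, object size and chunk count are all
assumptions; the point is only that object-aligned concurrent reads touch several
objects (and thus PGs/OSDs) at once, while a sequential QD=1 reader touches one
at a time.

# Sketch: read several 4 MiB-aligned chunks of a mapped RBD device concurrently,
# so the requests land on different RADOS objects and therefore different OSDs.
import os
from concurrent.futures import ThreadPoolExecutor

DEV = "/dev/rbd0"           # hypothetical: image already mapped with `rbd map`
OBJ_SIZE = 4 * 1024 * 1024  # default RBD object size, 4 MiB
N_CHUNKS = 16               # how many objects to read in parallel

def read_chunk(fd, index):
    # Each pread covers exactly one 4 MiB object; offsets are object-aligned.
    return len(os.pread(fd, OBJ_SIZE, index * OBJ_SIZE))

fd = os.open(DEV, os.O_RDONLY)
try:
    with ThreadPoolExecutor(max_workers=N_CHUNKS) as pool:
        total = sum(pool.map(lambda i: read_chunk(fd, i), range(N_CHUNKS)))
    print("read %d MiB across %d objects" % (total // (1024 * 1024), N_CHUNKS))
finally:
    os.close(fd)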

> experience. Also, 100MB/s from a spin drive over a network isn't all
> that bad, given QD=1, since the double network latency will be there
> for linear reads for every block you ask for. If the turn-around time
> over the network is 1ms, then 1000 x IO-size is all you could hope for
> optimally at QD=1.
>

That's not how it works in practice; otherwise I'd be getting
(1000/RTT) * IO-size from SSD drives as well, but that is not the case.
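
For what it's worth, here is the back-of-the-envelope version of that (my
numbers, all assumed, not measurements). A pure RTT bound over-predicts badly
for large IOs; adding a plausible per-IO service time on the HDD lands close
to the ~100 MB/s figure, which suggests the drive's service time per request,
not the network round trip alone, is the ceiling at QD=1:

def qd1_throughput_mb_s(io_size_mb, rtt_ms, service_ms):
    # At QD=1 each IO pays one network round trip plus the OSD/drive service
    # time, and the next IO is not issued until the previous one completes.
    per_io_ms = rtt_ms + service_ms
    return io_size_mb * (1000.0 / per_io_ms)

# RTT alone ("1000 x IO-size") for a 4M read: ~4000 MB/s, clearly not reachable.
print(qd1_throughput_mb_s(io_size_mb=4.0, rtt_ms=1.0, service_ms=0.0))
# Assumed ~30 ms HDD service time for a 4M read: ~129 MB/s, close to observed.
print(qd1_throughput_mb_s(io_size_mb=4.0, rtt_ms=1.0, service_ms=30.0))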


> So while it is easy to visualize a huge improvement by asking for more IO
> from what you imagine are idle drives, the normal cluster will be
> spreading tons and tons of IO all kinds of ways all the time, so making
> the server IO queue deeper is probably not going to improve the sum of
> all IO that goes to clients.
>

Yes, I realize this. I do have a quiet cluster though :-)

Z
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


