Re: librbd 4k read/write?


 



On Thu, Aug 10, 2023 at 12:47, Hans van den Bogert <hansbogert@xxxxxxxxx>
wrote:

> On Thu, Aug 10, 2023, 17:36 Murilo Morais <murilo@xxxxxxxxxxxxxx> wrote:
>
> > Good afternoon everybody!
> >
> > I have the following scenario:
> > Pool RBD replication x3
> > 5 hosts with 12 SAS spinning disks each
> >
> > I'm using exactly the following line with FIO to test:
> > fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4M -size=10G
> > -iodepth=16 -rw=write -filename=./test.img
> >
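The 4K case discussed below is presumably the same line with only the block
size changed, i.e. something along these lines (the exact invocation is an
assumption):

  fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4k -size=10G \
      -iodepth=16 -rw=write -filename=./test.img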
> > If I increase the block size I can easily reach 1.5 GB/s or more.
> >
> > But when I use a 4K block size I get a measly 12 megabytes per second,
> >
> This is 3,000 IOPS. I wouldn't call that bad for 60 spinning drives and a
> replication factor of 3. How many IOPS were you expecting?
>
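For reference, the arithmetic behind that figure, assuming a rough 150 random
IOPS per SAS spinner (the per-drive number is an assumption):

  12 MB/s / 4 KiB per IO              ~= 3,000 IOPS
  60 drives x ~150 IOPS / 3 replicas  ~= 3,000 write IOPS

so ~3,000 IOPS is about what 60 spinning disks behind 3x replication can be
expected to sustain for small 4K writes.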
> > which is quite annoying. I achieve the same rate if rw=read.
> >
> > If I use librbd's cache I get a considerable improvement in writing, but
> > reading remains the same.
> >
> > I already tested with rbd_read_from_replica_policy=balance but I didn't
> > notice any difference. I tried to leave readahead enabled by setting
> > rbd_readahead_disable_after_bytes=0 but I didn't see any difference in
> > sequential reading either.
> >
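For anyone reproducing this: the librbd cache and the read-side options above
are client options. A minimal sketch of setting them cluster-wide with ceph
config (the values shown are the ones discussed in this thread; whether they
actually help is exactly what is in question here):

  ceph config set client rbd_cache true
  ceph config set client rbd_cache_policy writeback
  ceph config set client rbd_read_from_replica_policy balance
  ceph config set client rbd_readahead_disable_after_bytes 0

The same settings can also be placed under the [client] section of ceph.conf.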
> > Note: I tested it on another, smaller cluster with 36 SAS disks and got
> > the same result.
> >
> I agree that this is a weird result compared to the 60-disk cluster. Are you
> using the same disks, and are all other parameters the same, such as the
> replication factor? Is the performance really identical? Maybe the 5-host
> cluster is not being saturated by your current fio test. Try running 2 or 4
> fio jobs in parallel.
>
Yes and yes. I will try running several in parallel and compare the results.
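A sketch of such a parallel run, either as separate fio processes against
different images or as one invocation with numjobs; the job count and flags
here are illustrative:

  fio -ioengine=libaio -direct=1 -invalidate=1 -name=partest -bs=4k -size=10G \
      -iodepth=16 -rw=write -numjobs=4 -group_reporting -directory=.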

>
> >
> > I don't know exactly what to look for or configure to get any
> > improvement.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



