Re: Mapped rbd is very slow

On Wed, Aug 14, 2019 at 2:49 PM Paul Emmerich <paul.emmerich@xxxxxxxx> wrote:
>
> On Wed, Aug 14, 2019 at 2:38 PM Olivier AUDRY <olivier@xxxxxxx> wrote:
> > let's test random write
> > rbd -p kube bench kube/bench --io-type write --io-size 8192 --io-threads 256 --io-total 10G --io-pattern rand
> > elapsed:   125  ops:  1310720  ops/sec: 10416.31  bytes/sec: 85330446.58
> >
> > dd if=/dev/zero of=test bs=8192k count=100 oflag=direct
> > 838860800 bytes (839 MB, 800 MiB) copied, 24.6185 s, 34.1 MB/s
> >
> > 34.1MB/s vs 85MB/s ....
>
> 34 apples vs. 85 oranges
>
> You are comparing 256 threads with a huge queue depth vs a single
> thread with a normal queue depth.
> Use fio on the mounted rbd to get better control over what it's doing.

When you said mounted, did you mean mapped or "a filesystem mounted on
top of a mapped rbd"?

There is no filesystem involved in "rbd bench" tests, so fio should be
run against the raw block device.  It still won't be completely apples
to apples, because with "rbd bench" or fio's rbd engine
(--ioengine=rbd) there is no block layer either, but it is closer...
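
If it helps, a roughly comparable fio invocation against the mapped
device might be something like the following (assuming the image is
mapped at /dev/rbd0; adjust the device path and sizes to your setup,
and note that writing to the raw device will destroy any data on it):

  fio --name=rbd-randwrite --filename=/dev/rbd0 --ioengine=libaio \
      --direct=1 --rw=randwrite --bs=8k --iodepth=256 --numjobs=1 \
      --size=10G --group_reporting

That keeps the 8K random-write pattern and a similar amount of
outstanding I/O as the "rbd bench" run above, while still going
through the kernel block layer the way dd does.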

Thanks,

                Ilya
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


