Re: struggling to achieve high bandwidth on Ceph dev cluster - HELP

Thanks for the reply.

Yes, 4 MB is the default, and I have tried it. For example, the output
below (posted) is from a 4 MB (default) run of 600 seconds. Sequential
and random reads give me good bandwidth (not posted here), but write
bandwidth is still very low. I am particularly interested in block
sizes, and the rados bench tool has a block size option, which I have
been using.

Total time run:         601.106
Total writes made:      2966
Write size:             4194304
Object size:            4194304
Bandwidth (MB/sec):     19.7369
Stddev Bandwidth:       14.8408
Max bandwidth (MB/sec): 64
Min bandwidth (MB/sec): 0
Average IOPS:           4
Stddev IOPS:            3.67408
Max IOPS:               16
Min IOPS:               0
Average Latency(s):     3.24064
Stddev Latency(s):      2.75111
Max latency(s):         42.4551
Min latency(s):         0.167701
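For reference, a run like the one above comes from an invocation of
this general shape. The pool name, PG count, and -t value here are
assumptions, not taken from the original post; with a vstart.sh
cluster the binaries live under ./bin/:

```shell
# Create a test pool (name "testbench" and PG count 64 are assumptions).
./bin/ceph osd pool create testbench 64 64

# 600-second write test with 4 MB objects (4194304 bytes) and 16
# concurrent IOs; --no-cleanup keeps the objects around so the seq/rand
# read tests below have something to read.
./bin/rados bench -p testbench 600 write -b 4194304 -t 16 --no-cleanup

# Sequential and random reads against the objects written above.
./bin/rados bench -p testbench 600 seq -t 16
./bin/rados bench -p testbench 600 rand -t 16

# Remove the benchmark objects when done.
./bin/rados -p testbench cleanup
```

The -b flag is only valid for write tests; the read tests reuse
whatever object size the write phase produced.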

On Wed, Feb 10, 2021 at 9:46 AM Marc <Marc@xxxxxxxxxxxxxxxxx> wrote:

>
> try 4MB that is the default not?
>
>
>
> > -----Original Message-----
> > Sent: 10 February 2021 09:30
> > To: ceph-users <ceph-users@xxxxxxx>; dev <dev@xxxxxxx>; ceph-qa@xxxxxxx
> > Subject:  struggling to achieve high bandwidth on Ceph dev
> > cluster - HELP
> >
> > Hi,
> >
> > I am using the rados bench tool on a development cluster built by
> > running the vstart.sh script. The cluster itself is working fine,
> > and I am interested in benchmarking it. However, I am struggling to
> > achieve good bandwidth (MB/sec). My target throughput is at least
> > 50 MB/sec, but mostly I am achieving around 15-20 MB/sec, which is
> > very poor.
> >
> > I am quite sure I am missing something: either I have to change the
> > cluster configuration through the vstart.sh script, or I am not
> > fully utilizing the rados bench tool, or perhaps both.
> >
> > Some of the shell commands I have been using to build the cluster
> > are below:
> > MDS=0 RGW=1 ../src/vstart.sh -d -l -n --bluestore
> > MDS=0 RGW=1 MON=1 OSD=4 ../src/vstart.sh -d -l -n --bluestore
> >
> > While using the rados bench tool I have been trying different
> > block sizes: 4K, 8K, 16K, 32K, 64K, 128K, 256K, and 512K. I have
> > also been changing the -t parameter to increase the number of
> > concurrent IOs.
> >
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
