Re: Performance Problems

That creates IO with a queue depth of 1, so you are effectively
measuring latency and not bandwidth.
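
For a throughput test you want several IOs in flight at once. As a
rough sketch (the parameters are only a starting point, not tuned for
your cluster), something like

`fio --name=seqwrite --directory=/mnt/test --rw=write --bs=1M --size=1G --ioengine=libaio --direct=1 --iodepth=16 --numjobs=4 --group_reporting`

should give a much better idea of the achievable bandwidth than a
single dsync'd dd stream.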

30 MB/s with 1 MB writes works out to ~33 ms per write on average (the
latency itself is a little less, because part of that time is the
actual IO).
Assuming the data is distributed across all 3 servers: every IO has to
wait for one of your "large drives", which apparently take ~33 ms on
average for a 1 MB write; of that, ~5 ms is the write itself (from
your 180 MB/s) and ~28 ms is latency.
Which seems like a reasonable result.
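
Spelled out with your numbers (back-of-the-envelope only, using the
~30 MB/s client result and the 180 MB/s from osd bench):

  1 MB / 30 MB/s   ~ 33 ms  per synchronous 1 MB write, as seen by the client
  1 MB / 180 MB/s  ~ 5.6 ms for the data transfer on the slowest drive
  33 ms - 5.6 ms   ~ 28 ms  of latency per write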


Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Fri, Dec 7, 2018 at 7:38 PM Scharfenberg, Buddy <blspcy@xxxxxxx> wrote:
>
> `dd if=/dev/zero of=/mnt/test/writetest bs=1M count=1000 oflag=dsync`
>
> -----Original Message-----
> From: Paul Emmerich [mailto:paul.emmerich@xxxxxxxx]
> Sent: Friday, December 07, 2018 12:31 PM
> To: Scharfenberg, Buddy <blspcy@xxxxxxx>
> Cc: Ceph Users <ceph-users@xxxxxxxxxxxxxx>
> Subject: Re:  Performance Problems
>
> What are the exact parameters you are using? I often see people using dd in a way that effectively just measures write latency instead of throughput.
> Check out fio as a better/more realistic benchmarking tool.
>
> Paul
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
> On Fri, Dec 7, 2018 at 7:05 PM Scharfenberg, Buddy <blspcy@xxxxxxx> wrote:
> >
> > I'm measuring with dd, writing from /dev/zero with a block size of 1 MB, 1000 times, to get client write speeds.
> >
> > -----Original Message-----
> > From: Paul Emmerich [mailto:paul.emmerich@xxxxxxxx]
> > Sent: Friday, December 07, 2018 11:52 AM
> > To: Scharfenberg, Buddy <blspcy@xxxxxxx>
> > Cc: Ceph Users <ceph-users@xxxxxxxxxxxxxx>
> > Subject: Re:  Performance Problems
> >
> > How are you measuring the performance when using CephFS?
> >
> > Paul
> >
> > --
> > Paul Emmerich
> >
> > Looking for help with your Ceph cluster? Contact us at
> > https://croit.io
> >
> > croit GmbH
> > Freseniusstr. 31h
> > 81247 München
> > www.croit.io
> > Tel: +49 89 1896585 90
> >
> > On Fri, Dec 7, 2018 at 6:34 PM Scharfenberg, Buddy <blspcy@xxxxxxx> wrote:
> > >
> > > Hello all,
> > >
> > >
> > >
> > > I’m new to Ceph management, and we’re having some performance issues with a basic cluster we’ve set up.
> > >
> > >
> > >
> > > We have 3 nodes set up: 1 with several large drives, 1 with a
> > > handful of small SSDs, and 1 with several NVMe drives. We have 46
> > > OSDs in total, a healthy FS being served out, and 1024 PGs split
> > > over the metadata and data pools. I am having performance problems
> > > on the clients which I’ve been unable to trace back to the cluster
> > > itself, and I could use some guidance. I am seeing around 600 MB/s
> > > out of each pool using rados bench, but only around 6 MB/s of
> > > direct transfer from clients using fuse and 30 MB/s using the
> > > kernel client. I’ve asked over in IRC and have been told,
> > > essentially, that my performance will be tied to the speed of our
> > > lowest performing OSD / (2 * ${num_rep}), and my numbers reflect
> > > that: my slowest disks do 180 MB/s according to osd bench, my
> > > writes are down around 30 MB/s at best, and replication is 3
> > > (180 / (2 * 3) = 30).
> > >
> > >
> > >
> > > What I was wondering is what, if anything, I can do to get performance for the individual clients near at least the write performance of my slowest OSDs. Also, given the constraints I have on most of my clients, how can I get better performance out of the ceph-fuse client?
> > >
> > >
> > >
> > > Thanks,
> > >
> > > Buddy.
> > >
> > > _______________________________________________
> > > ceph-users mailing list
> > > ceph-users@xxxxxxxxxxxxxx
> > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



