Re: Performance Problems

How are you measuring the performance when using CephFS?
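
Tools and caching make a big difference here. If you want an apples-to-apples comparison with your rados bench numbers, a large sequential write that bypasses the client page cache is a reasonable baseline. A minimal sketch, assuming your CephFS mount is at /mnt/cephfs (the path, file names, and sizes are just placeholders):

    # single large sequential write, bypassing the client page cache
    dd if=/dev/zero of=/mnt/cephfs/ddtest bs=4M count=1024 oflag=direct conv=fsync

    # or the same workload with fio, which also reports latencies
    fio --name=seqwrite --directory=/mnt/cephfs --rw=write --bs=4M --size=4G --direct=1

If instead you're timing cp or rsync over many small files, the per-file metadata round trips to the MDS will likely dominate, and you won't get anywhere near the rados bench numbers regardless of OSD speed.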

Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Fri, Dec 7, 2018 at 18:34, Scharfenberg, Buddy <blspcy@xxxxxxx> wrote:
>
> Hello all,
>
> I’m new to Ceph management, and we’re having some performance issues with a basic cluster we’ve set up.
>
> We have 3 nodes set up: one with several large drives, one with a handful of small SSDs, and one with several NVMe drives. There are 46 OSDs in total, a healthy FS being served out, and 1024 PGs split over the metadata and data pools. I'm seeing performance problems on the clients that I haven't been able to trace back to the cluster itself, and I could use some guidance.
>
> With rados bench I get around 600 MB/s out of each pool, but from the clients I only see around 6 MB/s with ceph-fuse and around 30 MB/s with the kernel client. I asked in IRC and was told, essentially, that my performance will be tied to the speed of my lowest-performing OSD divided by (2 * ${num_rep}), and my numbers reflect that: my slowest disks do 180 MB/s according to osd bench, replication is 3, and my writes top out around 30 MB/s (180 / (2 * 3) = 30).
>
> What I'm wondering is what, if anything, I can do to get per-client performance at least close to the write speed of my slowest OSDs. Also, given the constraints I have on most of my clients, how can I get better performance out of the ceph-fuse client?
>
> Thanks,
>
> Buddy.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com