Re: CephFS Performance

Re-adding the list:

So with email, you're talking about lots of small reads and writes. In my experience with DICOM data (thousands of ~20KB files per directory), cephfs doesn't perform very well at all on platter drives. I haven't experimented with pure SSD configurations, so I can't comment on that.
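To put rough numbers on that kind of workload, a small fio job against a CephFS mount point usually reproduces the pain; the path and sizes below are just placeholders for whatever matches your setup:

  fio --name=smallwrite --directory=/mnt/cephfs/benchtest \
      --ioengine=libaio --direct=1 --rw=randwrite --bs=20k \
      --size=512m --numjobs=4 --iodepth=16 --group_reporting

Run the same job on an RBD-backed filesystem and the difference should show up straight away in the IOPS and completion latency lines.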

Somebody may correct me here, but small-block I/O on writes makes latency that much more important, because each write has to wait for its replicas to be acknowledged before moving on to the next block.
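If you want to see that replication cost in isolation, rados bench with a small block size and a single outstanding op gives a rough per-write round trip (using your cephfs_data pool as the target, for 60 seconds):

  rados bench -p cephfs_data 60 write -b 4096 -t 1 --no-cleanup

With -t 1, every write has to be acknowledged by all replicas before the next one is issued, so the average latency it reports is close to what a sync-heavy mail workload pays per write.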

Without knowing the exact hardware details, my brain immediately jumps to networking constraints. Two or three spindle drives can pretty much saturate a 1 Gbps link. As soon as you create contention for that resource, you get iowait and latency, which shows up as system load.
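Rough math: 1 Gbps is about 125 MB/s on the wire, and a single 7200 RPM drive can stream on the order of 100-150 MB/s sequentially, so two or three spindles pushing client traffic plus replication through one 1 Gbps link will hit that ceiling almost immediately.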

You mentioned you don't control the network. Maybe you can scale down and out. 


On May 9, 2017 5:38 PM, "Webert de Souza Lima" <webert.boss@xxxxxxxxx> wrote:

On Tue, May 9, 2017 at 4:40 PM, Brett Niver <bniver@xxxxxxxxxx> wrote:
What is your workload like?  Do you have a single or multiple active
MDS ranks configured?

User traffic is heavy. I can't really put it in terms of MB/s or IOPS, but it's an email server with 25k+ users, usually about 6k simultaneously connected, receiving and reading email.
I have only one active MDS configured. The others are standby.
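For reference, "ceph mds stat" confirms the active/standby layout. If we ever try multiple active ranks it would be something along the lines of "ceph fs set <fsname> max_mds 2" (plus the allow_multimds flag on pre-Luminous releases), but multi-active MDS is still considered experimental on the versions we run, so take that as a sketch rather than a plan.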

On Tue, May 9, 2017 at 7:18 PM, Wido den Hollander <wido@xxxxxxxx> wrote:

> Op 9 mei 2017 om 20:26 schreef Brady Deetz <bdeetz@xxxxxxxxx>:
>
>
> If I'm reading your cluster diagram correctly, I'm seeing a 1 Gbps
> interconnect, presumably Cat6. Due to the additional latency of performing
> metadata operations, I could see cephfs performing at those speeds. Are you
> using jumbo frames? Also, are you routing?
>
> If you're routing, the router will introduce additional latency that an L2
> network wouldn't experience.
>

Partially true. I am running various Ceph clusters using L3 routing, and with a decent router the latency added by routing a packet is minimal, around 0.02 ms.
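That's easy to verify on your own network: run something like "ping -c 100" from a client to an OSD host across the router, and again between two hosts on the same L2 segment, then compare the average RTT. If the routed path only adds a few hundredths of a millisecond, routing isn't the bottleneck.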

Ceph spends much more time in the CPU than it takes the network to forward that IP packet.

I wouldn't be too afraid to run Ceph over an L3 network.
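On the jumbo frame question: whether a 9000-byte MTU really works end to end can be checked with something like "ping -M do -s 8972 <peer>" (8972 bytes of payload plus the ICMP and IP headers fills a 9000-byte frame, and -M do forbids fragmentation). If that fails while smaller sizes succeed, some hop in the path is still at 1500.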

Wido

> On May 9, 2017 12:01 PM, "Webert de Souza Lima" <webert.boss@xxxxxxxxx>
> wrote:
>
> > Hello all,
> >
> > I've been using cephfs for a while but never really evaluated its
> > performance.
> > As I put up a new ceph cluster, I thought I should run a benchmark to
> > see if I'm going the right way.
> >
> > From the results I got, I see that RBD performs *a lot* better in
> > comparison to cephfs.
> >
> > The cluster is like this:
> >  - 2 hosts with one SSD OSD each.
> >        these hosts have 2 pools: cephfs_metadata and cephfs_cache (for
> > cache tiering).
> >  - 3 hosts with 5 HDD OSDs each.
> >       these hosts have 1 pool: cephfs_data.
> >
> > All details, cluster setup, and results can be seen here:
> > https://justpaste.it/167fr
> >
> > I created the RBD pools the same way as the CephFS pools except for the
> > number of PGs in the data pool.
> >
> > I wonder why that difference or if I'm doing something wrong.
> >
> > Regards,
> >
> > Webert Lima
> > DevOps Engineer at MAV Tecnologia
> > *Belo Horizonte - Brasil*
> >


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
