Re: Optimizations of CephFS clients on WAN: Looking for suggestions.

On Mon, Mar 21, 2016 at 8:35 PM, Goncalo Borges
<goncalo.borges@xxxxxxxxxxxxx> wrote:
> Dear CephFS gurus...
>
> I would like your advice on how to improve performance without compromising
> reliability for CephFS clients deployed over a WAN.
>
> Currently, our infrastructure relies on:
> - ceph infernalis
> - a ceph object cluster, with all core infrastructure components sitting in
> the same data centre:
>   a. 8 storage servers (8 OSDs per server, each on a single spinning disk;
>      2 SSDs with 4 partitions each for OSD journals)
>   b. 3 MONs
> - one active MDS plus a second MDS in standby-replay mode, also in the same
> data centre.
> - OSDs, MONs and MDS all natively connected at 10 Gb/s
> - CephFS clients mounted via ceph-fuse at several distinct geographic
> locations (different network ranges)
> - the communication bottleneck between the CephFS clients and the core
> Ceph/CephFS infrastructure is not the wide-area network itself but the
> 1 Gb/s Ethernet cards on some of the hosts where the clients run.
>
> Although this setup is not exactly what we are aiming for in the future, I
> would like to ask, for now, which parameters we should tune to improve
> performance without compromising reliability, especially for those CephFS
> clients behind 1 Gb/s links.

Does that mean you have a faster interconnect between all points, and
it's just that the clients have slow NICs?

In any case, there isn't a lot of tuning that's obviously correlated with
this kind of geographic distribution. You could consider changing the
MDS-side session timeouts and the client-side reporting intervals.
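
If you want to experiment with those, the knobs live in ceph.conf. The
options below exist in Infernalis, but the values are only illustrative
starting points, not tested recommendations:

    [mds]
        # Seconds the MDS waits for a client to renew its capabilities
        # before considering the session stale (default 60). Raising it
        # is more forgiving of slow or lossy client links.
        mds session timeout = 120

        # Seconds before an unresponsive client session is automatically
        # closed (default 300).
        mds session autoclose = 600

    [client]
        # Seconds between client ticks (cap renewal and keepalives to
        # the MDS). Default is 1; a larger value means less chatter over
        # the WAN, at the cost of slower failure detection.
        client tick interval = 5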
But really, CephFS is largely developed on 1Gig links, and the biggest
factor is likely to be your latency when doing things like listing files
(which can require fetching updates from remote clients). If you run into
specific problems, it's probably easier to deal with those than to try to
predict them ahead of time.
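
When a specific operation does look slow, the admin sockets are a good
first stop. For example (the daemon ID and socket paths are illustrative
and depend on your deployment):

    # On the active MDS host: list client sessions and their state
    ceph daemon mds.<id> session ls

    # On a ceph-fuse client host: dump the client's view of its MDS
    # sessions, and its performance counters
    ceph daemon /var/run/ceph/ceph-client.admin.asok mds_sessions
    ceph daemon /var/run/ceph/ceph-client.admin.asok perf dump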
-Greg
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


