Optimizations of CephFS clients over WAN: Looking for suggestions.

Dear CephFS gurus...

I would like your advice on how to improve performance, without compromising reliability, for CephFS clients deployed over a WAN.

Currently, our infrastructure relies on:
- Ceph Infernalis
- a Ceph object cluster, with all core infrastructure components sitting in the same data centre:
  a. 8 storage servers (8 OSDs per server, each on a single spinning disk; 2 SSDs per server with 4 partitions each for the OSD journals)
  b. 3 MONs
- one active MDS and a second MDS in standby-replay mode, also in the same data centre.
- OSDs, MONs and MDS all natively connected at 10 Gb/s
- CephFS clients mounted via ceph-fuse at different physical locations (different network ranges)
- the communication bottleneck between the CephFS clients and the core Ceph/CephFS infrastructure is not the WAN itself but the 1 Gb Ethernet cards of some of the hosts where the clients are deployed (see the note just after this list).
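For completeness, the sketch below is how I would check what a given ceph-fuse client is currently running with, via its admin socket (this assumes the client has an admin socket configured; the socket path is only an example and the real name depends on the client id and pid):

    # dump the configuration the running ceph-fuse client is using
    ceph --admin-daemon /var/run/ceph/ceph-client.admin.asok config show | grep client_

    # client-side performance counters (cache/readahead behaviour, op latencies)
    ceph --admin-daemon /var/run/ceph/ceph-client.admin.asok perf dump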

Although this setup is not exactly what we are aiming for in the future, for now I would like to ask for suggestions on which parameters to tune to improve performance without compromising reliability, especially for those CephFS clients behind 1 Gb/s links.
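To make the question more concrete: I assume the relevant knobs are client-side options set in the [client] section of ceph.conf on the client hosts (or passed to ceph-fuse at mount time). The sketch below lists the options I believe exist in Infernalis, shown with what I understand to be their default values rather than recommendations; please correct me if these are the wrong knobs for clients behind high-latency, 1 Gb/s links.

    [client]
    # object cacher: size of the client data cache and how much dirty
    # data may accumulate before it is flushed back to the OSDs
    client_oc_size = 209715200           # 200 MiB (default)
    client_oc_max_dirty = 104857600      # 100 MiB (default)

    # readahead: how far the client reads ahead of the application;
    # larger values may help hide WAN latency for streaming reads
    client_readahead_max_periods = 4     # default
    client_readahead_max_bytes = 0       # 0 = no explicit byte limit (default)

    # metadata: number of inodes kept in the client cache
    client_cache_size = 16384            # default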

In the past I came across a generic article discussing this issue, but I am unable to find it now, or any other relevant information.

Help is appreciated.

Thank you for your feedback.

Cheers
Goncalo

 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
