Client connections for concurrent access to Ceph

Some background: we're building a huge computing cluster of 140 diskless compute nodes, and they all write a lot of computing data to storage concurrently.
    The users who submit jobs to the cluster also need access to the same storage location (to check progress and results).

We've built a Ceph cluster:
    3 mon nodes (one of them also runs the MDS)
    3 OSD nodes (each one has 10 OSDs + an SSD for journaling)
    24-port 10G switch
    10 gigabit - public network
    20 gigabit bonding - cluster network between the OSD nodes
    Ubuntu 12.04.5
    Ceph 0.87.2 - Giant
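
For reference, that layout can be sanity-checked from any mon node with the plain ceph CLI (nothing here is specific to our setup):

    ceph -s          # overall health, mon quorum, OSDs up/in
    ceph osd tree    # should show the 3 hosts with 10 OSDs each
    ceph mds stat    # the single active MDS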
-----------------------------------------------------
Clients have:
    10 gigabit for the Ceph connection
    CentOS 6.6 with an upgraded 3.19.8 kernel (the already running computing cluster)

Of course, all nodes, switches and clients were configured for jumbo frames.
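
In case it helps, a minimal end-to-end check for the jumbo frames (interface name and target IP here are just examples):

    ip link show eth0 | grep mtu        # expect mtu 9000
    ping -M do -s 8972 -c 3 10.0.0.1    # 9000 minus 28 bytes of IP/ICMP headers; must pass without fragmentation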

=========================================================================

First test:
    I thought of making one big shared RBD, but:
          -  RBD supports mapping & mounting by multiple clients, but not parallel writes (see the sketch below) ...
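
What I mean by that (image and pool names are just examples): mapping the same image on two clients works, but a regular filesystem on top of it cannot safely take writes from both at once:

    rbd create shared --pool rbd --size 1048576   # 1 TB image, example name
    rbd map rbd/shared                            # on client A
    rbd map rbd/shared                            # on client B - the mapping itself succeeds
    # but ext4/XFS on /dev/rbd0 is not cluster-aware: mounting it read-write on both
    # clients and writing concurrently corrupts it; safe sharing would need a cluster
    # filesystem (OCFS2/GFS2) on top.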

Second test:
    NFS over RBD - it works pretty well, but:
        1. The NFS gateway is a single point of failure.
        2. There's no performance scaling with the scale-out storage, i.e. it's a bottleneck (limited by the bandwidth of the NFS gateway) - see the sketch below.
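
The gateway setup was of this general shape (image name, export path, subnet and hostname are placeholders, not our exact config):

    # on the gateway
    rbd map rbd/nfsvol
    mkfs.xfs /dev/rbd0
    mount /dev/rbd0 /export
    echo '/export 10.0.0.0/24(rw,no_root_squash,async)' >> /etc/exports
    exportfs -ra
    # on each compute node
    mount -t nfs gateway:/export /mnt/work
    # every byte goes through the gateway's single 10G link - hence the bottleneck above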

Third test:
    We wanted to try CephFS, because our client is familiar with Lustre, which is quite close to CephFS in capabilities:
            1. I used one of my Ceph nodes in the client role: I mounted CephFS on it and ran dd with bs=1M (commands roughly as sketched below) ...
                    - I got wonderful write performance, ~1.1 GBytes/s (really close to 10 Gbit network throughput).

            2. I connected the CentOS client to the 10 gig public network, mounted CephFS, but ...
                    - it was only ~250 MBytes/s.

            3. I connected an Ubuntu client (not a Ceph member) to the 10 gig public network, mounted CephFS, and ...
                    - it was also only ~260 MBytes/s.

            Now I need to know: do Ceph member nodes perhaps have privileged access???
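
For completeness, the mount + dd test was of this general shape (monitor address, mount point and file name are placeholders, not the exact commands):

    mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
    dd if=/dev/zero of=/mnt/cephfs/ddtest bs=1M count=16384
    # adding conv=fdatasync (or oflag=direct) makes the figure less dependent on the client's page cache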

I'm sure you have more Ceph deployment experience -
    have you seen such CephFS performance deviations?

Thanks,
Shneur