Re: Clients' connection for concurrent access to ceph

On Wed, Jul 22, 2015 at 8:39 PM, Shneur Zalman Mattern
<shzama@xxxxxxxxxxxx> wrote:
> Workaround... We're building a huge computing cluster right now: 140
> DISKLESS compute nodes that move a lot of computing data to and from the
> storage concurrently.
>     Users who submit jobs to the cluster also need access to the same
> storage location (to check progress & results).
>
> We've built a Ceph cluster:
>     3 mon nodes (one of them combined with the mds)
>     3 osd nodes (each one has 10 OSDs + an SSD for journaling)
>     switch: 24 ports x 10G
>     10 gigabit - for the public network
>     20 gigabit bonding - between the osds
>     Ubuntu 12.04.5
>     Ceph 0.87.2 - giant
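>     (For reference, the public/cluster split lives in ceph.conf, roughly
>      like this - the subnets below are placeholders, not our real ones:
>         [global]
>         public network  = 10.10.10.0/24    # 10G client-facing network
>         cluster network = 10.10.20.0/24    # 20G bonded OSD replication network
>      )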
> -----------------------------------------------------
> Clients have:
>     10 gigabit for the Ceph connection
>     CentOS 6.6 with an upgraded 3.19.8 kernel (the already-running computing
> cluster)
>
> Of course, all nodes, switches and clients are configured for jumbo
> frames.
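>     (We checked the MTU end-to-end with something along these lines - the
>      host name is just an example:
>         # 8972 = 9000-byte MTU - 20 (IP header) - 8 (ICMP header)
>         ping -M do -s 8972 -c 3 osd-node-1
>      a frame that size has to get through without fragmentation.)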
>
> =========================================================================
>
> First test:
>     I thought about making one big shared RBD, but:
>           -  RBD supports mapping & mounting by multiple clients, but not
> parallel writes ...
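>     (The mapping itself is simple enough - roughly, with example image
>      name and size:
>         rbd create shared-img --size 102400
>         rbd map shared-img        # run on every client
>      but an ordinary local filesystem (ext4/XFS) on that device gets
>      corrupted when several clients mount it read-write; you'd need a
>      cluster filesystem like OCFS2 or GFS2 on top of the RBD.)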
>
> Second test:
>     NFS over RBD - it works pretty well, but:
>         1. The NFS gateway is a single point of failure.
>         2. There's no performance scaling from the scale-out storage, i.e. a
> bottleneck (limited by the bandwidth of the NFS gateway).
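>     (The gateway was, roughly, a single host doing something like the
>      following - the image name, paths and subnet are only examples:
>         rbd map nfs-img
>         mkfs.xfs /dev/rbd0
>         mount /dev/rbd0 /export
>         echo '/export 10.0.0.0/24(rw,no_root_squash)' >> /etc/exports
>         exportfs -ra
>      so all client traffic funnels through that one box.)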
>
> Third test:
>     We wanted to try CephFS, because our client is familiar with Lustre,
> which is quite close to CephFS in capabilities:
>            1. I used my Ceph nodes in the client role: I mounted CephFS on
> one of the nodes and ran dd with bs=1M ...
>                     - I got wonderful write performance, ~1.1 GBytes/s
> (really close to the 10Gbit network throughput)
>
>             2. I connected a CentOS client to the 10gig public network,
> mounted CephFS, but ...
>                     - it was just ~250 MBytes/s
>
>             3. I connected an Ubuntu client (not a Ceph member) to the 10gig
> public network, mounted CephFS, and ...
>                     - it was also only ~260 MBytes/s
>
>             Now I have to know: do Ceph member nodes perhaps have privileged
> access???

There's nothing in the Ceph system that would do this directly. My
first guess is that you're seeing the impact of write latencies (as
opposed to bandwidth) on your system. What is the network latency from
each node you've used as a client to the Ceph system? Exactly what dd
command are you using? How are you mounting CephFS?
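For instance, something like

    ping -c 20 osd-node-1        # round-trip latency to the cluster
    dd if=/dev/zero of=/mnt/cephfs/testfile bs=1M count=4096 conv=fdatasync

(host name and paths are placeholders) gives a much more meaningful
number than a plain dd without fdatasync, which largely measures the
client's page cache. It also matters whether you used the kernel client,
e.g.

    mount -t ceph mon-host:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

or ceph-fuse; the two can behave quite differently.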

Are you sure your network is functioning as expected? Run iperf
(preferably, on all your nodes simultaneously) and verify the results.
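Something along these lines (the host is a placeholder) should show
close to line rate in both directions if the 10G links and jumbo frames
are healthy:

    # on a storage node
    iperf -s
    # on each client, ideally at the same time
    iperf -c osd-node-1 -t 30 -P 4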

Separately, be aware that CephFS is generally not a supported
technology right now.
-Greg
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


