Re: Using CephFS in LXD containers

We have a project using CephFS (ceph-fuse) in Kubernetes containers. For us, throughput was limited by the mount point, not the cluster: sharing a single host mount point means every container is capped at the throughput of that one mount point, so we ended up mounting CephFS inside the containers instead. The initial reason we used Kubernetes with CephFS was multi-tenancy benchmarking, and on our infrastructure each mount point delivered the same per-mount throughput whether we ran 1 or 20 of them (so 20 mount points gave roughly 20x the aggregate throughput of 1). We didn't hit a ceiling until about 100 concurrent ceph-fuse mount points; up to that point, total throughput just kept climbing the more mount points we added.
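In case it's useful, here is a rough sketch of that kind of multi-mount comparison (Python; the monitor address, cephx user, mount paths and sizes below are placeholder values, and it assumes ceph-fuse plus a working /etc/ceph/ceph.conf and keyring on the node). It brings up N ceph-fuse mount points and writes through all of them in parallel, so the aggregate number can be compared against a single mount:

#!/usr/bin/env python3
# Rough aggregate-throughput check across N ceph-fuse mount points.
# Assumptions (placeholders, adjust for your cluster): ceph-fuse is installed,
# /etc/ceph/ceph.conf and the client keyring are in place, and MON_ADDR,
# CLIENT_ID, NUM_MOUNTS and WRITE_MB are made-up example values.
import os
import subprocess
import time
from concurrent.futures import ThreadPoolExecutor

MON_ADDR = "192.0.2.10:6789"   # placeholder monitor address
CLIENT_ID = "admin"            # placeholder cephx user
NUM_MOUNTS = 20                # how many independent ceph-fuse mounts to test
WRITE_MB = 1024                # MiB written through each mount point


def mount(idx):
    """Create a mount point and attach CephFS to it with ceph-fuse."""
    path = "/mnt/cephfs-%d" % idx
    os.makedirs(path, exist_ok=True)
    subprocess.run(["ceph-fuse", "-m", MON_ADDR, "--id", CLIENT_ID, path],
                   check=True)
    return path


def write_test(path):
    """Write WRITE_MB MiB through one mount point and fsync it."""
    chunk = b"\0" * (1024 * 1024)
    with open(os.path.join(path, "bench-" + os.path.basename(path)), "wb") as f:
        for _ in range(WRITE_MB):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())


def main():
    paths = [mount(i) for i in range(NUM_MOUNTS)]
    start = time.time()
    with ThreadPoolExecutor(max_workers=NUM_MOUNTS) as pool:
        list(pool.map(write_test, paths))
    elapsed = time.time() - start
    print("%d mounts: %.0f MiB/s aggregate"
          % (NUM_MOUNTS, NUM_MOUNTS * WRITE_MB / elapsed))
    for path in paths:
        subprocess.run(["fusermount", "-u", path], check=False)


if __name__ == "__main__":
    main()

Running it once with NUM_MOUNTS=1 and again with 20 gives the single-mount vs. many-mount comparison described above.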

On Tue, Dec 12, 2017 at 12:06 PM Bogdan SOLGA <bogdan.solga@xxxxxxxxx> wrote:
Hello, everyone!

We have recently started to use CephFS (Luminous, v12.2.1) from a few LXD containers. We have mounted it on the host servers and then exposed it to the LXD containers.
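For context, a host-side mount is typically exposed into an LXD container as a disk device, along these lines (the container name, host mount and in-container path below are placeholders, and the host is assumed to have CephFS mounted already):

#!/usr/bin/env python3
# Sketch: expose a host-side CephFS mount to an LXD container as a disk device.
import subprocess

CONTAINER = "c1"            # placeholder container name
HOST_MOUNT = "/mnt/cephfs"  # where CephFS is mounted on the host
GUEST_PATH = "/data"        # where it should appear inside the container

# Equivalent to: lxc config device add c1 cephfs disk source=/mnt/cephfs path=/data
subprocess.run(["lxc", "config", "device", "add", CONTAINER, "cephfs", "disk",
                "source=" + HOST_MOUNT, "path=" + GUEST_PATH],
               check=True)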

Do you have any recommendations (dos and don'ts) on this way of using CephFS?

Thank you, in advance!

Kind regards,
Bogdan
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
