Re: cephfs mount on osd node

The problem with mounting an RBD or CephFS on an OSD node arises when you do so with the kernel client.  In a previous message on the mailing list, John Spray explained this wonderfully:

  "This is not a Ceph-specific thing -- it can also affect similar systems like Lustre.  The classic case is when under some memory pressure, the kernel tries to free memory by flushing the client's page cache, but doing the flush means allocating more memory on the server, making the memory pressure worse, until the whole thing just seizes up."

If you're using ceph-fuse to mount CephFS, then your only problem is resource contention, nothing as severe as deadlocking.  Settings like the ones Jake mentioned can help you work around resource contention if it becomes an issue for you.  Don't change the settings unless you notice a problem, though; Ceph is pretty good at having sane defaults.
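
For illustration, a minimal sketch of the two mount styles (the mount
point, client id and monitor host below are placeholders, and the kernel
mount additionally needs your secret or secretfile option):

  # kernel client: this is the case that can deadlock on an OSD node
  mount -t ceph <mon-host>:6789:/ /mnt/cephfs -o name=admin
  # ceph-fuse: userspace client, so only resource contention to worry about
  ceph-fuse --id admin /mnt/cephfs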

On Wed, Aug 29, 2018 at 6:35 AM Jake Grimmett <jog@xxxxxxxxxxxxxxxxx> wrote:
Hi Marc,

We mount cephfs using FUSE on all 10 nodes of our cluster and, provided
that we limit bluestore memory use, find it to be reliable*.

bluestore_cache_size = 209715200
bluestore_cache_kv_max = 134217728

Without the above tuning, we get OOM errors.
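
In case it is useful, this is roughly how those settings sit in our
ceph.conf (the [osd] section is simply where we happen to put them; the
values are bytes, i.e. 200 MiB and 128 MiB, and the OSDs typically need
a restart to pick them up):

[osd]
# ~200 MiB total bluestore cache per OSD
bluestore_cache_size = 209715200
# ceiling for the KV (RocksDB) share of that cache, ~128 MiB
bluestore_cache_kv_max = 134217728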

As others will confirm, the FUSE client is more stable than the kernel
client, but slower.

ta ta

Jake

* We have 128GB of RAM per OSD node with 45 x 8TB drives, way below the
recommendation of 1GB RAM per TB of storage (45 x 8TB = 360TB, so
roughly 360GB); our OOM issues are completely predictable...

On 29/08/18 13:25, Marc Roos wrote:
>
>
> I have a 3 node test cluster and I would like to expand this with a 4th
> node that currently mounts the cephfs and rsyncs backups to it.  I can
> remember reading something about how you could create a deadlock
> situation by doing this.
>
> What are the risks I would be taking if I did this?
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
