cephfs mount on osd node

Hi Marc,

We mount CephFS using FUSE on all 10 nodes of our cluster and, provided
that we limit BlueStore memory use, find it to be reliable*.

bluestore_cache_size = 209715200
bluestore_cache_kv_max = 134217728

Without the above tuning, we get OOM errors.
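
For context, those values are in bytes (roughly 200 MiB and 128 MiB). A
minimal sketch of how the settings might look in ceph.conf, assuming they
are placed in the [osd] section on each OSD node:

    [osd]
    # assuming [osd] placement; adjust to your own conf layout
    # cap the BlueStore cache at ~200 MiB per OSD (value is in bytes)
    bluestore_cache_size = 209715200
    # cap the RocksDB (key/value) share of that cache at ~128 MiB
    bluestore_cache_kv_max = 134217728

After editing ceph.conf the OSDs need a restart to pick up the new limits.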

As others will confirm, the FUSE client is more stable than the kernel
client, but slower.
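
In case it helps, a minimal ceph-fuse mount sketch; the monitor address
and mount point below are placeholders, and it assumes the admin keyring
is in the default location:

    # mount CephFS via the FUSE client; -m points at a monitor
    sudo mkdir -p /mnt/cephfs
    sudo ceph-fuse -m mon-host:6789 /mnt/cephfs

    # unmount when finished
    sudo fusermount -u /mnt/cephfs

The kernel client (mount -t ceph ...) is faster, but as above we stick
with FUSE on OSD nodes for stability.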

ta ta

Jake

* We have 128 GB of RAM per OSD node, each holding 45 x 8 TB drives, which
is well below the recommendation of 1 GB of RAM per TB of storage; our OOM
issues are entirely predictable...

On 29/08/18 13:25, Marc Roos wrote:
> 
> 
> I have a 3-node test cluster and I would like to expand it with a 4th 
> node that is currently mounting the cephfs and rsyncing backups to it. I 
> remember reading something about how you could create a deadlock 
> situation by doing this. 
> 
> What risks would I be taking if I were to do this?
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


