Useful MDS configuration for heavily used Cephfs

Ceph 17.2.5:

Hi,

I'm looking for a sensible and useful MDS configuration for a CephFS
(~100 TB) that will be heavily used in the future (we have no operational
experience with it yet). For example, does increasing the
mds_cache_memory_limit or the number of active MDS instances make a difference?

The hardware does not impose any limits; I just want to know where the default
values can usefully be tuned before problems occur.
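For reference, both settings can be changed at runtime with the ceph CLI. The sketch below is illustrative only: the 16 GiB cache size, the two active MDS ranks, and the filesystem name `cephfs` are assumptions, not recommendations for your workload:

```shell
# Raise the MDS cache from its 4 GiB default to 16 GiB (illustrative value,
# expressed in bytes).
ceph config set mds mds_cache_memory_limit 17179869184

# Allow two active MDS ranks; "cephfs" is an assumed filesystem name.
ceph fs set cephfs max_mds 2

# Verify the running values.
ceph config get mds mds_cache_memory_limit
ceph fs get cephfs
```

Note that multiple active MDS ranks mainly help when the metadata workload can be spread across them (for example via directory pinning), and enough standby daemons should remain available for failover.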

Thanks,
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



