Re: Useful MDS configuration for heavily used Cephfs

A few details are missing that people would need in order to give you advice.

How many files are you expecting to be in this 100 TB of capacity?
This really dictates what you are looking for. It could be full of 4 KB files, which is a very different proposition from it being full of 100 MB files.

What sort of media is this file system made up of?
If you have tens of millions of files on HDD, then you will want a separate CephFS metadata pool on much faster storage such as SSD or NVMe.
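
For illustration, one common way to do that is a device-class CRUSH rule that restricts the metadata pool to SSD OSDs. A minimal sketch, assuming the metadata pool is named cephfs_metadata (check the output of "ceph fs ls" for the actual name) and that your SSDs carry the ssd device class:

    # Create a replicated rule that only places data on ssd-class OSDs
    ceph osd crush rule create-replicated ssd-only default host ssd

    # Move the CephFS metadata pool onto that rule
    ceph osd pool set cephfs_metadata crush_rule ssd-only

Note that changing the crush_rule of an existing pool triggers a rebalance of its contents onto the new devices.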

What sort of use case are you expecting for this storage?
You say it is heavily used, but what does that really mean?
Do you have 1,000 HPC nodes all trying to access millions of 4 KB files?
Or are you using it as a more general-purpose file system for, say, home directories?
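
On the two knobs you ask about below: both can matter, but sensible values depend entirely on the answers above. Purely as an illustration of where they live (the numbers are placeholders, not recommendations, and <fsname> stands in for your file system's name):

    # Raise the MDS cache target from the 4 GiB default to, e.g., 16 GiB
    ceph config set mds mds_cache_memory_limit 17179869184

    # Run two active MDS daemons (keep standbys around for failover)
    ceph fs set <fsname> max_mds 2

Multiple active MDSs only help when the workload spreads across the directory tree; a single hot directory still lands on one rank unless you pin subtrees.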



Darren Soothill

Looking for help with your Ceph cluster? Contact us at https://croit.io/
 
croit GmbH, Freseniusstr. 31h, 81247 Munich 
CEO: Martin Verges - VAT-ID: DE310638492 
Com. register: Amtsgericht Munich HRB 231263 
Web: https://croit.io/ | YouTube: https://goo.gl/PGE1Bx



> On 15 Jan 2023, at 09:26, E Taka <0etaka0@xxxxxxxxx> wrote:
> 
> Ceph 17.2.5:
> 
> Hi,
> 
> I'm looking for a reasonable and useful MDS configuration for a heavily
> used CephFS (~100 TB) – heavily used in the future, that is; we have no
> experience with it so far.
> For example, does it make a difference to increase the
> mds_cache_memory_limit or the number of MDS instances?
> 
> The hardware does not impose any limits; I just want to know which default
> values can usefully be tuned before problems occur.
> 
> Thanks,

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



