Re: MDS / CephFS behaviour with unusual directory layout

MDS CPU load is proportional to metadata ops/second. MDS RAM cache is
proportional to # of files (including directories) in the working set.
Metadata pool size is proportional to the total # of files, plus
everything in the RAM cache. I have seen the metadata pool balloon 8x
between being idle and having every inode open by a client.
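If you want to watch this while testing, something along these lines
should show it (mds.a is just a placeholder for your active MDS name,
and the daemon command has to run on the MDS host):

  ceph df detail                       # size / object count of the cephfs metadata pool
  ceph fs status                       # per-MDS inode and dentry counts, client sessions
  ceph daemon mds.a perf dump mds_mem  # inodes (ino) and dentries (dn) held in the MDS cache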
The main thing I'd recommend is getting SSD OSDs dedicated to the
metadata pool, and SSDs for the HDD OSDs' DB/WAL. NVMe if you can. If
you put that much metadata on HDDs only, it's going to be slow.
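Assuming your OSDs have the usual hdd/ssd device classes set, pinning
the metadata pool to the SSDs is roughly this (rule name is arbitrary,
and "cephfs_metadata" is whatever your metadata pool is actually
called):

  ceph osd crush rule create-replicated replicated-ssd default host ssd
  ceph osd pool set cephfs_metadata crush_rule replicated-ssd

Changing the crush_rule on an existing pool rebalances its PGs onto the
SSD OSDs, so expect some backfill traffic when you do it.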



On Fri, Jul 26, 2019 at 5:11 AM Stefan Kooman <stefan@xxxxxx> wrote:
>
> Hi List,
>
> We are planning to move a filesystem workload (currently NFS) to CephFS.
> It's around 29 TB. The unusual thing here is the number of directories
> used to host the files. To combat a "too many files in one directory"
> scenario, a "let's make use of recursive directories" approach was
> taken.
> Not ideal either. This workload is supposed to be moved to (Ceph) S3
> sometime in the future, but until then, it has to go to a shared
> filesystem ...
>
> So what is unusual about this? The directory layout looks like this:
>
> /data/files/00/00/[0-8][0-9]/[0-9]/ and from this point on, another 7
> nested directories are created to store a single file.
>
> The total number of directories in a file path is 14. There are around
> 150 M files in 400 M directories.
>
> The working set won't be big. Most files will just sit around and will
> not be touched. The number of actively used files will be a few thousand.
>
> We are wondering whether this kind of directory structure is suitable
> for CephFS. Might the MDS have difficulty keeping up with that many
> inodes / dentries, or doesn't it care at all?
>
> The metadata overhead might be horrible, but we will test that out.
>
> Thanks,
>
> Stefan
>
>
> --
> | BIT BV  https://www.bit.nl/        Kamer van Koophandel 09090351
> | GPG: 0xD14839C6                   +31 318 648 688 / info@xxxxxx
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


