Re: MDS / CephFS behaviour with unusual directory layout

Quoting Nathan Fish (lordcirth@xxxxxxxxx):
> MDS CPU load is proportional to metadata ops/second. MDS RAM cache is
> proportional to # of files (including directories) in the working set.
> Metadata pool size is proportional to total # of files, plus
> everything in the RAM cache. I have seen the metadata pool
> balloon 8x between being idle and having every inode open by a
> client.
> The main thing I'd recommend is dedicating SSD OSDs to the
> metadata pool, and putting the HDD OSDs' DB/WAL on SSDs. NVMe if you
> can. If you put that much metadata on HDDs only, it's going to be slow.
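
For the archive: the MDS RAM cache Nathan describes is capped by the
mds_cache_memory_limit option, which can be raised when the working set
is large. A minimal sketch (the 16 GiB value is only an example, not a
sizing recommendation):

  # raise the MDS cache cap to roughly 16 GiB (16 * 1024^3 bytes)
  ceph config set mds mds_cache_memory_limit 17179869184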

We use only SSDs for the OSD data pool and NVMe for the metadata pool,
so that should be fine. Apart from the initial loading of that many
files / directories, this workload shouldn't be a problem.
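
In case it is useful to others reading the archive: we point the
metadata pool at the NVMe device class with a dedicated CRUSH rule,
roughly like this (the rule and pool names below are ours, substitute
your own):

  # replicated rule restricted to OSDs with device class "nvme",
  # failure domain "host"
  ceph osd crush rule create-replicated metadata-nvme default host nvme
  # "cephfs_metadata" is an example pool name
  ceph osd pool set cephfs_metadata crush_rule metadata-nvme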

Thanks for your feedback.

Regards, Stefan

-- 
| BIT BV  https://www.bit.nl/        Kamer van Koophandel 09090351
| GPG: 0xD14839C6                   +31 318 648 688 / info@xxxxxx
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


