Re: Cephfs scalability question

On 4/19/22 22:57, Vladimir Brik wrote:
> Hello
>
> Is it advisable to limit the sizes of data pools or metadata pools of a cephfs filesystem for performance or other reasons?

I would say no. With Ceph, the more OSDs / storage nodes you have, the more PGs you can provision, and the better the performance and durability. It can also make balancing easier: more PGs means smaller PGs, which makes it easier for the balancer to even out disk utilization.
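As a rough illustration only: the traditional rule of thumb is roughly 100 PGs per OSD across all pools, rounded to a power of two (on recent releases the pg_autoscaler can manage pg_num for you, so treat this purely as a sanity check; the OSD count and data fraction below are made-up numbers):

```python
# Rough sketch of the classic PG-count rule of thumb, NOT a substitute for
# the pg_autoscaler: target ~100 PGs per OSD, then round the per-pool value
# up to a power of two.

def suggest_pg_num(num_osds, replica_size, target_pgs_per_osd=100,
                   pool_data_fraction=1.0):
    """Estimate pg_num for one pool.

    pool_data_fraction is the share of the cluster's data expected to land
    in this pool (close to 1.0 for a single big data pool, much less for
    the metadata pool).
    """
    raw = num_osds * target_pgs_per_osd * pool_data_fraction / replica_size
    # Round up to the next power of two, as Ceph tooling conventionally does.
    pg_num = 1
    while pg_num < raw:
        pg_num *= 2
    return pg_num

# Example: 500 OSDs, 3x replication, one pool holding ~95% of the data.
print(suggest_pg_num(500, 3, pool_data_fraction=0.95))   # -> 16384
```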


> We are migrating to cephfs and I estimate that we will eventually end up with 10-15PB of data and ~1.5TB of metadata. Should I divide the data among multiple data pools? Perhaps even create multiple cephfs filesystems?

What type of disks are you planning to use for the data pool(s), and what for the metadata pool? Do the data pool OSDs get a separate WAL/DB device?

When there is a lot of metadata involved, I would recommend putting it on flash storage. Try to spread it over as many OSDs as you can.
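If it helps, a minimal sketch of what that can look like, assuming flash OSDs tagged with the "ssd" device class and a metadata pool called cephfs_metadata (the pool name, rule name and CRUSH root are assumptions, adjust to your cluster; the small ceph() wrapper is just a convenience for this sketch):

```python
# Sketch: pin the CephFS metadata pool to flash-backed OSDs via a CRUSH rule
# on the "ssd" device class.
import subprocess

def ceph(*args):
    """Run a ceph CLI command and return its stdout."""
    return subprocess.run(["ceph", *args], check=True,
                          capture_output=True, text=True).stdout

# Create a replicated rule that only selects OSDs with device class "ssd",
# spreading replicas across hosts under the (assumed) "default" root.
ceph("osd", "crush", "rule", "create-replicated",
     "replicated_ssd", "default", "host", "ssd")

# Point the (assumed) metadata pool at that rule.
ceph("osd", "pool", "set", "cephfs_metadata", "crush_rule", "replicated_ssd")
```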

How many clients will you have, and what kind of workload? Is stability of the highest importance, or scalability? I am asking because multiple filesystems have only been declared stable since Pacific. Only use multiple active MDS daemons when you actually need them.

There have been *a lot* of improvements in that area (multiple active MDS, balancing, snapshot support with multiple active MDS), but when issues are reported, my impression is that they usually involve snapshots and / or multiple active MDS. I cannot recall having seen multiple-filesystem issues on this list, but that *might* also be because it is not (yet) used a lot. It could also be that it is rock solid (I really hope so). I do not know whether snapshots with multiple active MDS _and_ multiple filesystems are already supported. I do not want to scare you away from any of the newish features, just think about whether you really need them.
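For completeness, a small sketch of the relevant knobs, with a made-up filesystem name; it only illustrates the commands, not a recommendation to create a second filesystem:

```python
# Sketch only: create a second filesystem and keep it at a single active MDS
# until one MDS actually becomes the bottleneck.
import subprocess

def ceph(*args):
    """Run a ceph CLI command and return its stdout."""
    return subprocess.run(["ceph", *args], check=True,
                          capture_output=True, text=True).stdout

# "ceph fs volume create" creates the filesystem plus its data/metadata pools.
ceph("fs", "volume", "create", "archive_fs")

# Stay at one active MDS (the default); only raise max_mds when needed.
ceph("fs", "set", "archive_fs", "max_mds", "1")
```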

There are multiple users on this list with huge clusters and plenty of active MDS daemons, so it can definitely work and scale.

Gr. Stefan


