> Hi,
> Is it advisable to limit the sizes of data pools or metadata pools
> of a cephfs filesystem for performance or other reasons?
I assume you don't mean quotas for pools, right? The pool size is
limited by the number and size of the OSDs, of course. I can't really
say what's advisable or not; it really depends on the actual
requirements. Ceph scales pretty well. Performance can be
significantly improved if you use pinning with multiple active MDS
servers.
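
For reference, a minimal sketch of what that could look like (the
filesystem name "cephfs", the mount point and the directory names are
just placeholders, not your actual setup):

  # allow two active MDS ranks for the filesystem
  ceph fs set cephfs max_mds 2

  # pin one subtree to rank 0 and another to rank 1
  # (run setfattr on a mounted client)
  setfattr -n ceph.dir.pin -v 0 /mnt/cephfs/projectA
  setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/projectB

With that, the metadata load of the two trees is handled by separate
MDS daemons instead of being moved around by the dynamic balancer.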
> We are migrating to cephfs and I estimate that we will eventually
> end up with 10-15PB of data and ~1.5TB of metadata. Should I divide
> the data among multiple data pools? Perhaps even create multiple
> cephfs filesystems?
This also depends on your actual requirements. For example, if you
have lots of "cold" data you could archive it on erasure-coded pools
on HDDs while "hot" data is on SSDs and maybe replicated (see the
sketch below the docs excerpt). Multiple filesystems are an option,
but if possible you should stick to one filesystem with pinning;
here's an excerpt from the docs [1]:
Multiple file systems do not share pools. This is particularly
important for snapshots but also because no measures are in place to
prevent duplicate inodes. The Ceph commands prevent this dangerous
configuration.
Each file system has its own set of MDS ranks. Consequently, each
new file system requires more MDS daemons to operate and increases
operational costs. This can be useful for increasing metadata
throughput by application or user base but also adds cost to the
creation of a file system. Generally, a single file system with
subtree pinning is a better choice for isolating load between
applications.
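
And for the cold-data idea, here's a rough sketch of how an EC pool on
HDDs could be attached to the existing filesystem (profile values, pool
name and paths are only examples, adjust pg_num etc. for your cluster):

  # EC profile on HDDs, 4+2 just as an example
  ceph osd erasure-code-profile set ec42 k=4 m=2 \
      crush-failure-domain=host crush-device-class=hdd

  # create the pool and allow overwrites (required for CephFS data pools)
  ceph osd pool create cephfs_cold_data erasure ec42
  ceph osd pool set cephfs_cold_data allow_ec_overwrites true

  # attach it as an additional data pool and point an archive dir at it
  ceph fs add_data_pool cephfs cephfs_cold_data
  setfattr -n ceph.dir.layout.pool -v cephfs_cold_data /mnt/cephfs/archive

New files below /mnt/cephfs/archive would then be written to the EC
pool while everything else stays on the default (replicated) data pool.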
Regards,
Eugen
[1] https://docs.ceph.com/en/latest/cephfs/multifs/
Quoting Vladimir Brik <vladimir.brik@xxxxxxxxxxxxxxxx>:
Hello
Is it advisable to limit the sizes of data pools or metadata pools
of a cephfs filesystem for performance or other reasons?
We are migrating to cephfs and I estimate that we will eventually
end up with 10-15PB of data and ~1.5TB of metadata. Should I divide
the data among multiple data pools? Perhaps even create multiple
cephfs filesystems?
Thanks,
Vlad
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx