Hi Patrick,

Thanks for your reply. I still have some questions about multi-fs.

1. In the current design, the system cannot create more than one fs in a
pool. Feature #15066 (http://tracker.ceph.com/issues/15066) will handle
this issue. Filer::write() identifies a RADOS object by pool ID plus OID.
If we want to support multiple file systems in one pool, we need to add a
prefix tag to the OID, such as fs1:100000000. The pool ID still comes from
the layout information, which in turn comes from the MDSMap. Is that
correct?

2. If Feature #15066 is resolved, can a single active MDS serve multiple
file systems? In my understanding, an active MDS has an MDSRank object,
which owns one MDLog. We would need to change MDSRank so that it holds
multiple MDLogs, one MDLog per file system. There seems to be no issue
tracking this problem.

Thanks,
Marvin Zhang

On Thu, Jan 17, 2019 at 6:17 AM Patrick Donnelly <pdonnell@xxxxxxxxxx> wrote:
>
> On Wed, Jan 16, 2019 at 1:21 AM Marvin Zhang <fanzier@xxxxxxxxx> wrote:
> > Hi CephFS experts,
> > From the documentation, I know multi-fs within a cluster is still an
> > experimental feature.
> > 1. Is there any estimate of the stability and performance of this feature?
>
> The remaining blockers [1] need to be completed. No developer has yet
> taken on this task. Perhaps by the O release.
>
> > 2. It seems that each FS will consume at least 1 active MDS and
> > different FSs can't share an MDS. Suppose I want to create 10 FSs; I
> > would need at least 10 MDSs. Is that right? Is there any limit on the
> > number of MDSs within a cluster?
>
> There is no limit on the number of MDSs, but there is a limit on the
> number of actives (multimds). In the not-too-distant future, container
> orchestration platforms (e.g. Rook) underneath Ceph would provide a
> way to dynamically spin up new MDSs in response to the creation of a
> file system.
>
> [1] http://tracker.ceph.com/issues/22477
>
> --
> Patrick Donnelly
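
P.S. For concreteness, here is a minimal sketch of the prefixing idea from
question 1. It assumes CephFS's usual "<inode-hex>.<object-index-hex>"
object name and simply prepends an fs tag; the "fs1:" scheme and the
make_oid helper are my own illustration, not existing Ceph code:

```python
# Hypothetical sketch: tag RADOS object names with a file system prefix
# so that two file systems can share one data pool. The base name
# "<inode-hex>.<object-index-hex>" mirrors CephFS's data object naming;
# the fs prefix is the assumption being proposed above.

def make_oid(fs_name: str, inode: int, obj_index: int) -> str:
    """Build a per-filesystem RADOS object name."""
    base = f"{inode:x}.{obj_index:08x}"   # e.g. "100000000.00000000"
    return f"{fs_name}:{base}"            # e.g. "fs1:100000000.00000000"

print(make_oid("fs1", 0x100000000, 0))
```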
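
P.P.S. The change I am imagining in question 2 could be sketched roughly
like this. These MDSRank/MDLog classes are stand-ins to show the shape of
the data structure (one journal per file system), not the actual C++
classes in the Ceph tree:

```python
# Hypothetical sketch (not actual Ceph code): an MDSRank that owns one
# metadata journal (MDLog) per file system instead of a single MDLog.

class MDLog:
    """Stand-in for a per-filesystem metadata journal."""
    def __init__(self, fs_name: str):
        self.fs_name = fs_name
        self.events = []

    def submit(self, event: str) -> None:
        self.events.append(event)

class MDSRank:
    """Stand-in for an active MDS rank serving several file systems."""
    def __init__(self, fs_names):
        # one MDLog per file system, keyed by fs name
        self.mdlogs = {name: MDLog(name) for name in fs_names}

    def log_event(self, fs_name: str, event: str) -> None:
        # route each metadata event to the journal of its file system
        self.mdlogs[fs_name].submit(event)

rank = MDSRank(["fs1", "fs2"])
rank.log_event("fs1", "mkdir /a")
print(len(rank.mdlogs))
```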