Re: cephfs: status of directory fragmentation and multiple filesystems

On Thu, Jun 30, 2016 at 7:17 PM, Xiaoxi Chen <superdebuger@xxxxxxxxx> wrote:
> Greg,
>     Would you mind sharing your insight about the potential issues of
> multiple FS?  It looks to me like we can separate every component of
> the cluster --- i.e., each FS has its own data pool and metadata pool,
> and the pools can even be mapped to different OSDs. Separate MDS nodes
> mean they don't compete for memory. The only shared part is the
> monitor, so it seems simpler than dir frag and very likely to work?
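
(For concreteness, the separation described above might be set up roughly like
this on a Jewel-era cluster; the pool names, PG counts, and CRUSH rule IDs are
placeholders, and exact command names can differ between releases:)

    # Creating more than one filesystem has to be enabled explicitly.
    ceph fs flag set enable_multiple true --yes-i-really-mean-it

    # A dedicated data pool and metadata pool per filesystem.
    ceph osd pool create fs1_data 64
    ceph osd pool create fs1_metadata 64
    ceph fs new fs1 fs1_metadata fs1_data

    ceph osd pool create fs2_data 64
    ceph osd pool create fs2_metadata 64
    ceph fs new fs2 fs2_metadata fs2_data

    # Optionally map each filesystem's pools to disjoint OSDs by giving
    # them their own CRUSH rules (rule IDs 1 and 2 are placeholders).
    ceph osd pool set fs1_data crush_ruleset 1
    ceph osd pool set fs2_data crush_ruleset 2

Each filesystem then takes its active MDS from the set of standby daemons, so
running the MDS daemons on separate nodes keeps them from competing for
memory; only the monitors remain shared.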

I actually expect it does work. John may have more thoughts since he
wrote it, but the code merged pretty late in the Jewel cycle and there
are a few things to think about:
1) a comparative lack of testing, due to the newness of the code,
2) the possibility that some interfaces may change,
3) a lack of multi-fs awareness in many of the fsck systems.

This comes up especially if, for instance, you want to give each
tenant their own FS. Providing each tenant an MDS daemon may be
feasible, but giving each user their own pool probably isn't. We'll be
enabling (or have enabled? I forget) putting each FS in a different
RADOS namespace rather than in separate pools, but none of the fsck
tools are prepared to see multiple inodes with the same number
distinguished only by namespace.
-Greg
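
(To illustrate that last point: RADOS allows objects with identical names to
coexist in different namespaces of the same pool, so two filesystems sharing
one metadata pool could each hold a metadata object for the same inode number.
A quick sketch with the rados CLI; the pool name, namespaces, and object name
are made up for illustration, and this is not how the fsck tools themselves
operate, just the ambiguity they would have to handle:)

    # Write an object with the same name into two namespaces of a shared pool.
    rados -p cephfs_metadata -N fs1 put 10000000000.00000000 /tmp/frag-a
    rados -p cephfs_metadata -N fs2 put 10000000000.00000000 /tmp/frag-b

    # Listing across all namespaces shows two distinct objects with one name;
    # a scan keyed only on the name (i.e. the inode number) would conflate them.
    rados -p cephfs_metadata ls --all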

>
>      A single MDS can only make use of 1~2 CPU cores and provides less
> than 2000 TPS for most operations (tested with mds_log = false,
> which gives the upper bound of performance; ops include rename, utime,
> open). So multi-fs is pretty much a *must have* for anyone who wants
> to provide an FS service.
>
> Xiaoxi
>
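
(For reference, the mds_log = false setting mentioned above would go in the
[mds] section of ceph.conf roughly as below; it disables MDS journaling and is
only useful for benchmarking an upper bound, not for real deployments:)

    [mds]
        mds log = false
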
> 2016-06-30 1:53 GMT+08:00 Gregory Farnum <gfarnum@xxxxxxxxxx>:
>> On Wed, Jun 29, 2016 at 10:47 AM, Radoslaw Zarzynski
>> <rzarzynski@xxxxxxxxxxxx> wrote:
>>> Hello,
>>>
>>> I saw the recent question about having more than 1 active
>>> MDS in a cluster. I would like to ask similar ones, but about
>>> 1) directory fragmentation and 2) running multiple filesystems
>>> within the same cluster.
>>>
>>> When are those features expected to be production-ready?
>>> What do we need to achieve this status?
>>
>> These are both all about the QA work of just making sure they're
>> well-tested in the nightlies, and demonstrating that the functionality
>> isn't broken. We're expecting dirfrags to be enabled in Kraken; I
>> don't know if there's a target timeline around multi-fs.
>>
>> There are tickets available in the tracker that basically boil down to
>> "demonstrate dirfrags actually get exercised in the nightlies", etc.,
>> if you're interested in contributing!
>> -Greg