Re: Status of cephfs / multiple mds

On Tue, Jun 21, 2016 at 10:00 PM, Michael Wyraz <michael@xxxxxxxx> wrote:
> Hello,
>
> I'm curious about the current state of cephfs stability when using
> multiple MDS daemons and/or snapshots. It seems the "Early Adopters"
> documentation hasn't changed since Hammer, about a year ago. There have
> been two major releases and many point releases since then, so things
> have probably changed and there is more experience. Is it safe now to
> run cephfs with more than one active MDS?

It hasn't materially changed since that documentation was created.
We've been focusing on repair tools and single-MDS stability since
that time. We're working on doing some comprehensive planning for the
future, and it looks like multi-MDS is going to be our next big
thing...but we're not done yet and that could change. In the
meantime, we've actually locked multi-MDS down much more, so you can't
turn it on by mistake or hide that you've done so when debugging. :P
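
(For reference, enabling a second active MDS is now an explicit per-
filesystem opt-in. It looks roughly like the following, assuming a
filesystem named "cephfs"; the exact flag names can differ between
releases:

    # acknowledge that multiple active MDS daemons are experimental
    ceph fs set cephfs allow_multimds true --yes-i-really-mean-it
    # then raise the number of active MDS daemons
    ceph fs set cephfs max_mds 2

The second command only takes effect once the opt-in flag is set.)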

If it *is* our next big thing, we're hoping to have it ready for
Luminous in early-to-mid next year, but we really don't know yet
whether that's realistic.
-Greg
--
