Multi-MDS CephFS upgrades limitation

I sent this to the users list yesterday, but it really is more of a developer question,
so I'm reposting it here:

One of the main limitations of using CephFS is the requirement to reduce the
number of active MDS daemons to one during upgrades.  As far as I can tell this
has been a known problem since Luminous (~2017).  This issue essentially
requires downtime during upgrades for any CephFS cluster that needs more than
one active MDS at all times.  I saw there were some improvements to the upgrade
process with 16.2.6 (you no longer have to stop the standby MDSes), but it has
me wondering if there are any plans to fix this limitation soon?
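For context, the documented workaround during an upgrade looks roughly like the following. This is a sketch of the standard procedure, assuming a filesystem named "cephfs" (substitute your own filesystem name and original max_mds value):

```shell
# Record the current number of active MDS ranks.
ceph fs get cephfs | grep max_mds

# Reduce the cluster to a single active MDS; extra ranks
# stop one at a time, which can take a while on busy clusters.
ceph fs set cephfs max_mds 1

# Wait until only rank 0 remains active before upgrading.
ceph status

# ... upgrade and restart the MDS daemons ...

# Restore the original number of active ranks afterwards.
ceph fs set cephfs max_mds 2
```

It is this reduce-to-one step, and the reduced metadata throughput while it is in effect, that amounts to downtime for clusters sized around multiple active MDSes.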

Thanks,
Bryan
_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx


