Re: Upgrading ceph and mapped rbds


 



I had several kernel-mapped RBDs as well as ceph-fuse mounted CephFS clients when I upgraded from Jewel to Luminous. I rolled out the client upgrades over a few weeks after the cluster upgrade. I had tested that my client use cases would be fine running Jewel against a Luminous cluster, so there were no surprises when I did it in production.

On Tue, Apr 3, 2018, 11:21 PM Konstantin Shalygin <k0ste@xxxxxxxx> wrote:
> The VMs are XenServer VMs with virtual disks stored on the NFS server which has the RBD mounted … So there is no migration from my POV, as there is no second storage to migrate to ...



All your pain is self-inflicted.

Just FYI, clients are not interrupted when you upgrade Ceph itself. Clients are
interrupted only when you change something they have to support afterwards,
for example if you (suddenly) change the CRUSH tunables or raise the minimum
required client version (for this reason, clients should be upgraded before
the cluster's requirements are tightened).
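
For example (a rough sketch, assuming a Luminous monitor; the exact release names depend on your clients), you can check what the connected clients report before tightening anything:

    # List the feature bits / release that every connected client and daemon reports
    ceph features

    # Raise the minimum required client release only after all clients
    # (kernel RBD maps, ceph-fuse mounts, librbd VMs) report at least that release
    ceph osd set-require-min-compat-client jewel

    # Switching CRUSH tunables has a similar effect: clients too old to
    # understand the new profile will no longer be able to connect
    ceph osd crush tunables optimal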




k
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


