Re: CEPH/CEPHFS upgrade questions (9.2.0 ---> 10.2.1)

On Wed, May 18, 2016 at 6:04 PM, Goncalo Borges
<goncalo.borges@xxxxxxxxxxxxx> wrote:
> Dear All...
>
> Our infrastructure is the following:
>
> - We use CEPH/CEPHFS (9.2.0)
> - We have 3 mons and 8 storage servers supporting 8 OSDs each.
> - We use SSDs for journals (2 SSDs per storage server, each serving 4 OSDs).
> - We have one main mds and one standby-replay mds.
> - We are using the ceph-fuse client to mount CephFS.
>
> We are preparing an upgrade to Jewel 10.2.1, since CephFS is now
> announced as production-ready and ceph-fuse has ACL support (which is
> something we need).
>
> I have a couple of questions regarding the upgrade procedure:
>
> 1) Can we jump directly from 9.2.0 to 10.2.1? Or should we go through all
> the intermediate releases (9.2.0 --> 9.2.1 --> 10.2.0 --> 10.2.1)?

This shouldn't be a problem; if it is, the release notes will say so. :)
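
If you want a quick sanity check after upgrading the packages on each
node, something along these lines should show which version each daemon
is actually running (the mon ID "a" is just a placeholder for one of yours):

    ceph --version            # version of the locally installed packages
    ceph tell mon.a version   # version a given monitor is running
    ceph tell osd.* version   # version every OSD reports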

>
> 2) The upgrade procedure states that the upgrade order should be: 1)
> MONs, 2) OSDs, 3) MDSes and 4) clients.
>    2.1) Can I upgrade / restart each MON independently? Or should I shut down
> all MONs and only restart the services once they are all on the same version?

Yes, you can restart them independently. Ceph is designed for
zero-downtime upgrades.
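
A rough sketch of how a rolling monitor restart usually looks, assuming
systemd-managed daemons and a placeholder mon ID (adjust for your init
system and actual mon names):

    # on one monitor host: upgrade the ceph packages, then restart only that mon
    systemctl restart ceph-mon@mon1
    # wait until it has rejoined the quorum before touching the next one
    ceph mon stat    # or just: ceph -s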

>    2.2) I am guessing that it is safe to keep the OSDs on server A running
> (under 9.2.0) while we upgrade the OSDs on server B to a newer version. Can
> you please confirm?

Yes.
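
One common pattern for the per-server OSD restarts (a sketch, not the one
true procedure; the OSD IDs below are placeholders) is to suppress
rebalancing while a server's OSDs bounce:

    ceph osd set noout                  # don't mark restarting OSDs out and trigger backfill
    # on the server being upgraded: update packages, then restart its OSDs
    systemctl restart ceph-osd@10 ceph-osd@11
    ceph -s                             # wait for PGs to return to active+clean
    # repeat server by server, and once everything is upgraded:
    ceph osd unset noout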

>    2.3) Finally, can I upgrade / restart each MDS independently? If yes, is
> there a particular order (like first the standby-replay one and then the
> main one)? Or should I shut down all MDS services (making sure that no
> clients are connected) and only restart them once they are all on the
> same version?

Especially since you should only have one active MDS, restarting them
individually shouldn't be an issue. I guess I'd recommend that you
restart the active one last though, just to prevent having to replay
more often than necessary. ;)
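
Roughly, and again assuming systemd units with placeholder MDS names, that
order would look like:

    ceph mds stat                     # see which daemon is active and which is standby-replay
    systemctl restart ceph-mds@mds2   # restart the standby-replay daemon first
    systemctl restart ceph-mds@mds1   # then the active one; the upgraded standby takes over
    ceph mds stat                     # confirm one active + one standby-replay again
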
-Greg
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


