Re: CEPH/CEPHFS upgrade questions (9.2.0 ---> 10.2.1)

Thank you Greg...

There is one further thing that is not explained in the release notes and may be worth mentioning.

The rpm structure (for RedHat-compatible releases) changed in Jewel: there are now (ceph + ceph-common + ceph-base + ceph-mon/osd/mds + others) packages, whereas in Infernalis there were only (ceph + ceph-common + others) packages.
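
Just to double check my understanding of the split (untested; I am simply querying the Jewel repo, and the output noted in the comments is what I expect from the package list further down, not something I have verified):

    # Which Jewel package ships each daemon binary?
    # ('yum provides' accepts file paths and searches the repo filelists)
    yum provides /usr/bin/ceph-mon    # expect: ceph-mon-10.2.1
    yum provides /usr/bin/ceph-osd    # expect: ceph-osd-10.2.1
    yum provides /usr/bin/ceph-mds    # expect: ceph-mds-10.2.1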

I haven't tested things yet myself, but the standard upgrade instructions just say to do a 'yum update && yum install ceph', and I actually wonder how this will pull in ceph-mon on a MON, ceph-osd on an OSD server or ceph-mds on an MDS, unless everything is pulled in on every node (even if not used afterwards).
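
If that is the case, what I suspect we will end up doing is to pull the role-specific package explicitly on each node type, something like this (untested, just my reading of the Jewel package list below):

    # On each MON host
    yum update && yum install ceph-mon
    # On each OSD server
    yum update && yum install ceph-osd
    # On each MDS host
    yum update && yum install ceph-mds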

Cheers

G.





    * INFERNALIS:
    ceph-9.2.1-0.el7.x86_64.rpm
    ceph-common-9.2.1-0.el7.x86_64.rpm
    ceph-debuginfo-9.2.1-0.el7.x86_64.rpm
    ceph-devel-compat-9.2.1-0.el7.x86_64.rpm
    cephfs-java-9.2.1-0.el7.x86_64.rpm
    ceph-fuse-9.2.1-0.el7.x86_64.rpm
    ceph-libs-compat-9.2.1-0.el7.x86_64.rpm
    ceph-radosgw-9.2.1-0.el7.x86_64.rpm
    ceph-selinux-9.2.1-0.el7.x86_64.rpm
    ceph-test-9.2.1-0.el7.x86_64.rpm
    libcephfs1-9.2.1-0.el7.x86_64.rpm
    libcephfs1-devel-9.2.1-0.el7.x86_64.rpm
    libcephfs_jni1-9.2.1-0.el7.x86_64.rpm
    libcephfs_jni1-devel-9.2.1-0.el7.x86_64.rpm
    librados2-9.2.1-0.el7.x86_64.rpm
    librados2-devel-9.2.1-0.el7.x86_64.rpm
    libradosstriper1-9.2.1-0.el7.x86_64.rpm
    libradosstriper1-devel-9.2.1-0.el7.x86_64.rpm
    librbd1-9.2.1-0.el7.x86_64.rpm
    librbd1-devel-9.2.1-0.el7.x86_64.rpm
    python-ceph-compat-9.2.1-0.el7.x86_64.rpm
    python-cephfs-9.2.1-0.el7.x86_64.rpm
    python-rados-9.2.1-0.el7.x86_64.rpm
    python-rbd-9.2.1-0.el7.x86_64.rpm
    rbd-fuse-9.2.1-0.el7.x86_64.rpm

    * JEWEL:
    ceph-10.2.1-0.el7.x86_64.rpm
    ceph-base-10.2.1-0.el7.x86_64.rpm
    ceph-common-10.2.1-0.el7.x86_64.rpm
    ceph-debuginfo-10.2.1-0.el7.x86_64.rpm
    ceph-devel-compat-10.2.1-0.el7.x86_64.rpm
    cephfs-java-10.2.1-0.el7.x86_64.rpm
    ceph-fuse-10.2.1-0.el7.x86_64.rpm
    ceph-libs-compat-10.2.1-0.el7.x86_64.rpm
    ceph-mds-10.2.1-0.el7.x86_64.rpm
    ceph-mon-10.2.1-0.el7.x86_64.rpm
    ceph-osd-10.2.1-0.el7.x86_64.rpm
    ceph-radosgw-10.2.1-0.el7.x86_64.rpm
    ceph-selinux-10.2.1-0.el7.x86_64.rpm
    ceph-test-10.2.1-0.el7.x86_64.rpm
    libcephfs1-10.2.1-0.el7.x86_64.rpm
    libcephfs1-devel-10.2.1-0.el7.x86_64.rpm
    libcephfs_jni1-10.2.1-0.el7.x86_64.rpm
    libcephfs_jni1-devel-10.2.1-0.el7.x86_64.rpm
    librados2-10.2.1-0.el7.x86_64.rpm
    librados2-devel-10.2.1-0.el7.x86_64.rpm
    libradosstriper1-10.2.1-0.el7.x86_64.rpm
    libradosstriper1-devel-10.2.1-0.el7.x86_64.rpm
    librbd1-10.2.1-0.el7.x86_64.rpm
    librbd1-devel-10.2.1-0.el7.x86_64.rpm
    librgw2-10.2.1-0.el7.x86_64.rpm
    librgw2-devel-10.2.1-0.el7.x86_64.rpm
    python-ceph-compat-10.2.1-0.el7.x86_64.rpm
    python-cephfs-10.2.1-0.el7.x86_64.rpm
    python-rados-10.2.1-0.el7.x86_64.rpm
    python-rbd-10.2.1-0.el7.x86_64.rpm
    rbd-fuse-10.2.1-0.el7.x86_64.rpm
    rbd-mirror-10.2.1-0.el7.x86_64.rpm
    rbd-nbd-10.2.1-0.el7.x86_64.rpm



On 05/25/2016 07:45 AM, Gregory Farnum wrote:
On Wed, May 18, 2016 at 6:04 PM, Goncalo Borges
<goncalo.borges@xxxxxxxxxxxxx> wrote:
Dear All...

Our infrastructure is the following:

- We use CEPH/CEPHFS (9.2.0)
- We have 3 mons and 8 storage servers supporting 8 OSDs each.
- We use SSDs for journals (2 SSDs per storage server, each serving 4 OSDs).
- We have one main mds and one standby-replay mds.
- We are using ceph-fuse client to mount cephfs.

We are preparing an upgrade to Jewel 10.2.1, since CephFS is now announced as
production-ready and ceph-fuse has ACL support (which is something we do
need).

I do have a couple of questions regarding the upgrade procedure:

1) Can we jump directly from 9.2.0 to 10.2.1? Or should we go through all
the intermediate releases (9.2.0 --> 9.2.1 --> 10.2.0 --> 10.2.1)?
This shouldn't be a problem; if it is, the release notes will say so. :)
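
For my own notes: after upgrading a node I plan to verify the running daemon versions with something like the following (untested; the mon check uses the local admin socket, and mon.$(hostname -s) as the mon id is my assumption):

    # On a mon host, via the admin socket
    ceph daemon mon.$(hostname -s) version
    # From a node with an admin keyring, ask all OSDs
    ceph tell osd.* version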

2) The upgrade procedure establishes that the upgrade order should be: 1)
MONs, 2) OSDs, 3) MDS and 4) clients.
   2.1) Can I upgrade / restart each MON independently? Or should I shut down
all MONs and only restart the services once all are on the same version?
Yes, you can restart them independently. Ceph is designed for
zero-downtime upgrades.
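
For my own notes, a rough sketch of what I plan to run on each MON, one at a time (untested; the systemd unit name ceph-mon@<short hostname> is my assumption):

    # Restart the mon on this host only
    systemctl restart ceph-mon@$(hostname -s)
    # Wait until it is back in quorum before moving to the next mon
    ceph quorum_status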

   2.2) I am guessing that it is safe to keep the OSDs in server A running
(under 9.2.0) while we upgrade the OSDs in server B to a newer version. Can
you please confirm?
Yes.
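
Again for my own notes, a rough per-server sketch (untested; that ceph-osd.target restarts all OSDs on a host is my assumption):

    # Optional: avoid rebalancing while the OSDs on one server restart
    ceph osd set noout
    # On the server being upgraded
    yum update && yum install ceph-osd
    systemctl restart ceph-osd.target
    # Check health / wait for PGs to settle before moving to the next server
    ceph -s
    # Once every server is done
    ceph osd unset noout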

   2.3) Finally, can I upgrade / restart each MDS independently? If yes, is
there a particular order (like first the standby-replay one and then the
main one)? Or should I shut down all MDS services (making sure that no
clients are connected) and only restart them once all are on the same
version?
Especially since you should only have one active MDS, restarting them
individually shouldn't be an issue. I guess I'd recommend that you
restart the active one last though, just to prevent having to replay
more often than necessary. ;)
-Greg
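
A last note to self, following Greg's suggestion (untested; the unit name ceph-mds@<short hostname> is my assumption):

    # On the standby-replay MDS host first
    systemctl restart ceph-mds@$(hostname -s)
    # Confirm it is back as standby(-replay) before touching the active MDS
    ceph mds stat
    # Then restart the active MDS on its host in the same way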

-- 
Goncalo Borges
Research Computing
ARC Centre of Excellence for Particle Physics at the Terascale
School of Physics A28 | University of Sydney, NSW  2006
T: +61 2 93511937
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
