Re: would people mind a slow osd restart during luminous upgrade?

On 02/09/2017 04:19 AM, David Turner wrote:
The only issue I can think of is if there isn't a version of the clients
fully tested to work with a partially upgraded cluster, or a documented
incompatibility requiring downtime. We've had upgrades where we had to
upgrade clients first, and others where we had to do the clients last,
due to issues with how the clients interacted with an older, partially
upgraded, or newer cluster.

If the FileStore is changing this much, I can imagine a Jewel client
having a hard time locating the objects it needs from a Luminous cluster.

AFAIU, this would be on the osd side and completely transparent to clients.

This has to do with how the osds keep track of object snapshots (in the event of the head object being deleted?), and clients themselves should have nothing to worry about.
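
To make the distinction concrete, here's an illustrative sketch (plain
Python; these are not actual Ceph types or code) of the two on-disk
representations of an object whose head has been deleted while snapshot
clones still exist:

    # Conceptual model only -- names and fields are made up for
    # illustration, not taken from the Ceph source.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SnapdirStyle:
        """Pre-luminous: once the head is deleted but clones remain, a
        separate 'snapdir' object carries the snapshot metadata."""
        snapdir_name: str                        # e.g. "foo:snapdir"
        clones: List[str] = field(default_factory=list)

    @dataclass
    class WhiteoutStyle:
        """Luminous: the head object is kept but flagged as a whiteout,
        invisible to clients yet still carrying the clone metadata, so
        no separate snapdir object is needed."""
        head_name: str                           # e.g. "foo:head"
        whiteout: bool = True                    # head logically deleted
        clones: List[str] = field(default_factory=list)

Either way, the bookkeeping object is internal to the OSD.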

  -Joao

On Wed, Feb 8, 2017 at 8:09 PM Sage Weil <sweil@xxxxxxxxxx> wrote:

    Hello, ceph operators...

    Several times in the past we've had to do some on-disk format
    conversion during upgrade, which meant that the first time the
    ceph-osd daemon started after the upgrade it had to spend a few
    minutes fixing up its on-disk files.  We haven't had to recently,
    though, and generally try to avoid such things.

    However, there's a change we'd like to make in FileStore for
    luminous (*), and it would save us a lot of time and complexity if
    it were a one-shot update during the upgrade.  It would probably
    take in the neighborhood of 1-5 minutes for a 4-6TB HDD.  That
    means that when restarting the daemon during the upgrade the OSD
    would stay down for that period (vs the usual <1 minute restart
    time).

    Does this concern anyone?  It probably means the upgrades will
    take longer if you're going host by host, since the time per host
    will go up.
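
    Back of the envelope (illustrative Python; the host count is made
    up, the per-OSD time is the worst case above, and it assumes all
    OSDs on a host restart together so their conversions run in
    parallel):

        # Rough estimate of added wall-clock time for a host-by-host
        # upgrade, using the 1-5 minute per-OSD conversion quoted above.
        hosts = 20              # illustrative, not from any real cluster
        minutes_per_host = 5    # worst case; OSDs on one host convert
                                # in parallel, so each host pays one window

        extra_minutes = hosts * minutes_per_host
        print(f"~{extra_minutes} extra minutes total "
              f"({minutes_per_host} min/host x {hosts} hosts)")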

    sage


    * eliminate 'snapdir' objects, replacing them with a head object +
    whiteout.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


