Re: would people mind a slow osd restart during luminous upgrade?

Hi Sage,

Does the conversion run in parallel for all the OSDs being restarted on a host?

Because 5 minutes per server is very different from 150 minutes when a server has 30 OSDs (see the sketch below).

Thank you,
George 
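
To make the arithmetic above concrete, here is a rough sketch (in Python; not official Ceph tooling) of restarting every OSD on a host at once so the one-shot conversions overlap. The ceph-osd@<id> systemd unit naming and the id list are assumptions about a typical systemd-based deployment:

# Rough sketch, not official Ceph tooling: restart every OSD on a host
# concurrently so the one-shot conversions overlap, making per-host
# downtime ~minutes_per_osd instead of osds_per_host * minutes_per_osd.
# Assumes systemd units named ceph-osd@<id>; the id list is hypothetical.
import subprocess
from concurrent.futures import ThreadPoolExecutor

osd_ids = [0, 1, 2]  # hypothetical: ids of the OSDs on this host

def restart(osd_id):
    # systemctl blocks until the restart job finishes, so running these
    # calls in threads is what makes the conversions run concurrently.
    subprocess.run(["systemctl", "restart", f"ceph-osd@{osd_id}"], check=True)

with ThreadPoolExecutor(max_workers=len(osd_ids)) as pool:
    list(pool.map(restart, osd_ids))  # list() forces execution and surfaces failures

Whether this is acceptable depends on the cluster tolerating a whole host's OSDs being down simultaneously (typically with noout set for the duration), which is presumably the trade-off behind the question above.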

> On Feb 8, 2017, at 22:09, Sage Weil <sweil@xxxxxxxxxx> wrote:
> 
> Hello, ceph operators...
> 
> Several times in the past we've had to do some on-disk format conversion 
> during upgrade, which meant that the first time the ceph-osd daemon started 
> after the upgrade it had to spend a few minutes fixing up its on-disk files.  
> We haven't had to do that recently, though, and we generally try to avoid 
> such things.
> 
> However, there's a change we'd like to make in FileStore for luminous (*), 
> and it would save us a lot of time and complexity if it were a one-shot 
> update during the upgrade.  It would probably take in the neighborhood of 
> 1-5 minutes for a 4-6TB HDD.  That means that when restarting the daemon 
> during the upgrade, the OSD would stay down for that period (vs the usual 
> <1 minute restart time).
> 
> Does this concern anyone?  It probably means the upgrades will take longer 
> if you're going host by host since the time per host will go up.
> 
> sage
> 
> 
> * eliminate 'snapdir' objects, replacing them with a head object + 
> whiteout.
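
For context on the footnote: 'snapdir' is the placeholder object RADOS used to carry snapshot metadata once a head object was deleted while clones of it survived. Below is a minimal sketch, purely illustrative (none of these names are Ceph's), of the before/after:

# Minimal, hypothetical model (not Ceph code) of the footnoted change.
# Deleting a head object while snapshot clones of it survive used to
# leave a separate 'snapdir' object holding the clone metadata (the
# SnapSet); the luminous scheme keeps the head object itself, flagged
# as a whiteout, so the metadata never has to move.
from dataclasses import dataclass, field

@dataclass
class Obj:
    name: str
    whiteout: bool = False                       # head kept only as a snap anchor
    snapset: list = field(default_factory=list)  # ids of surviving clones

def delete_head_old(head):
    """Pre-luminous: drop the head and create a snapdir for the SnapSet."""
    return Obj(name=head.name + ":snapdir", snapset=head.snapset)

def delete_head_new(head):
    """Luminous: keep the head in place, marked as a whiteout."""
    head.whiteout = True
    return head

The one-shot conversion would presumably scan each OSD's store and rewrite every existing snapdir into this head-plus-whiteout form, which is where the 1-5 minutes per disk would go.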
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


