On Thu, 9 Feb 2017, George Mihaiescu wrote:
> Hi Sage,
>
> Is the update running in parallel for all OSDs being restarted?
>
> Because 5 min per server is very different from 150 min when there are
> 30 OSDs per server.

In parallel.

sage

> Thank you,
> George
>
> > On Feb 8, 2017, at 22:09, Sage Weil <sweil@xxxxxxxxxx> wrote:
> >
> > Hello, ceph operators...
> >
> > Several times in the past we've had to do some on-disk format
> > conversion during an upgrade, which meant that the first time the
> > ceph-osd daemon started after the upgrade it had to spend a few
> > minutes fixing up its on-disk files. We haven't had to do that
> > recently, though, and we generally try to avoid such things.
> >
> > However, there's a change we'd like to make in FileStore for
> > luminous (*), and it would save us a lot of time and complexity if
> > it were a one-shot update during the upgrade. It would probably
> > take in the neighborhood of 1-5 minutes for a 4-6 TB HDD. That
> > means that when restarting the daemon during the upgrade, the OSD
> > would stay down for that period (vs the usual <1 minute restart
> > time).
> >
> > Does this concern anyone? It probably means the upgrades will take
> > longer if you're going host by host, since the time per host will
> > go up.
> >
> > sage
> >
> >
> > * eliminate 'snapdir' objects, replacing them with a head object +
> > whiteout.
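A back-of-the-envelope sketch in Python of the timing question above,
using the illustrative figures from the thread (30 OSDs per host, the
5-minute upper end of Sage's per-OSD estimate); actual conversion time
will of course vary with drive size and speed:

    # Rough per-host downtime estimate for the one-shot FileStore
    # conversion, using the figures quoted in the thread. These
    # numbers are illustrative only; real time depends on the drive.

    osds_per_host = 30       # George's example host
    minutes_per_osd = 5      # upper end of the 1-5 minute estimate

    serial_minutes = osds_per_host * minutes_per_osd  # one OSD at a time
    parallel_minutes = minutes_per_osd                # all OSDs at once

    print(f"serial:   {serial_minutes} min per host")    # 150 min
    print(f"parallel: {parallel_minutes} min per host")  # 5 min

Since the conversion runs in parallel across the OSDs on a host, the
per-host cost stays near the single-OSD figure. For restarts of this
length, operators commonly set the cluster's noout flag first
(ceph osd set noout) so the extended downtime does not trigger
unnecessary recovery or rebalancing.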
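As for the footnoted change itself, the snippet below is a minimal
conceptual sketch, not the actual FileStore conversion code: the names
(Obj, FLAG_WHITEOUT, convert_snapdirs) are hypothetical, and it only
illustrates the idea of rewriting each legacy 'snapdir' object as a
head object carrying a whiteout marker:

    # Conceptual sketch only -- NOT the real FileStore code. A legacy
    # 'snapdir' object (kept when a head is deleted but clones still
    # reference its snapshots) is rewritten as a head object with a
    # whiteout flag. All names here are hypothetical.

    from dataclasses import dataclass, field

    FLAG_WHITEOUT = 0x1  # hypothetical "logically deleted head" flag

    @dataclass
    class Obj:
        name: str
        is_snapdir: bool = False   # legacy representation
        flags: int = 0
        clones: list = field(default_factory=list)  # snapshot clones

    def convert_snapdirs(objects):
        """One-shot pass: rewrite each snapdir as head + whiteout."""
        for o in objects:
            if o.is_snapdir:
                o.is_snapdir = False      # becomes an ordinary head
                o.flags |= FLAG_WHITEOUT  # ...marked as a whiteout
        return objects

The one-shot cost discussed in the thread comes from having to make a
pass like this over every object on the OSD during the upgrade restart.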