Hi Martin,

We had sortbitwise set on other jewel clusters well before 10.2.9 was
out. 10.2.8 added the warning if it is not set, but the flag should be
safe on 10.2.6.

-- Dan

On Tue, Jul 18, 2017 at 11:43 AM, Martin Palma <martin@xxxxxxxx> wrote:
> Can the "sortbitwise" flag also be set if we have a cluster running
> some OSDs on 10.2.6 and some OSDs on 10.2.9? Or should we wait until
> all OSDs are on 10.2.9?
>
> The monitor nodes are already on 10.2.9.
>
> Best,
> Martin
>
> On Fri, Jul 14, 2017 at 1:16 PM, Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:
>> On Mon, Jul 10, 2017 at 5:06 PM, Sage Weil <sage@xxxxxxxxxxxx> wrote:
>>> On Mon, 10 Jul 2017, Luis Periquito wrote:
>>>> Hi Dan,
>>>>
>>>> I've enabled it on a couple of big-ish clusters and had the same
>>>> experience - a few seconds of disruption caused by the peering
>>>> process being triggered, like any other crushmap update causes. I
>>>> can't remember whether it triggered data movement, but I have a
>>>> feeling it did...
>>>
>>> That's consistent with what one should expect.
>>>
>>> The flag triggers a new peering interval, which means the PGs will
>>> peer, but there is no change in the mapping or data layout or
>>> anything else. The only thing that is potentially scary here is
>>> that *every* PG will repeer at the same time.
>>
>> Thanks Sage & Luis. I can confirm that setting sortbitwise on a
>> large cluster is basically a non-event... nothing to worry about.
>>
>> (Btw, we just upgraded our biggest prod clusters to jewel -- that
>> also went totally smoothly!)
>>
>> -- Dan
>>
>>> sage
>>>
>>>> On Mon, Jul 10, 2017 at 3:17 PM, Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:
>>>> > Hi all,
>>>> >
>>>> > With 10.2.8, ceph will now warn if you haven't yet set
>>>> > sortbitwise.
>>>> >
>>>> > I just updated a test cluster, saw that warning, then ran the
>>>> > necessary
>>>> >
>>>> >     ceph osd set sortbitwise
>>>> >
>>>> > I noticed a short re-peering which took around 10s on this small
>>>> > cluster with very little data.
>>>> >
>>>> > Has anyone done this already on a large cluster with lots of
>>>> > objects? It would be nice to hear that it isn't disruptive
>>>> > before running it on our big production instances.
>>>> >
>>>> > Cheers, Dan
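
Putting the thread's advice together, a minimal sketch of the sequence,
assuming a Jewel cluster whose mons are already upgraded. The per-OSD
version check via "ceph tell osd.* version" and the flag check via
"ceph osd dump" are illustrative additions, not from the thread; only
"ceph osd set sortbitwise" itself appears above:

    # Confirm the running version of every OSD before setting the
    # flag (assumed pre-check, not from the thread)
    ceph tell osd.* version

    # See whether sortbitwise is already among the cluster flags
    # (assumed pre-check)
    ceph osd dump | grep flags

    # Set the flag: expect a brief repeer of every PG at once, but
    # no change to mappings or data layout
    ceph osd set sortbitwise

    # Watch the PGs return to active+clean
    ceph -s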