Is there any way to move an existing non-sharded bucket index to a sharded one? Or is there any way (online or offline) to move all objects from a non-sharded bucket to a sharded one?

2016-06-13 11:38 GMT+03:00 Sean Redmond <sean.redmond1@xxxxxxxxx>:
> Hi,
>
> I have a few buckets here with >10M objects in them, and the index pool was a
> problem when scrubbing. I have set the IO priority and set the disk scheduler
> to CFQ, and I don't really see these problems any more. I would never really
> be happy disabling scrubs.
>
> I do still see the problems when backfill takes place, but thankfully this is
> pretty rare on the SSD storage.
>
> I found that using index shards also helps with very large buckets.
>
> Thanks
>
> On Mon, Jun 13, 2016 at 1:13 AM, Василий Ангапов <angapov@xxxxxxxxx> wrote:
>>
>> Thanks, Sean!
>>
>> BTW, is it a good idea to turn off scrub and deep-scrub on the bucket.index
>> pool?
>> We have something like 5 million objects in it, and when it is scrubbing
>> RGW just stops working until it's finished...
>>
>> Or will setting the "idle" IO priority for scrub help?
>>
>> 2016-06-12 16:07 GMT+03:00 Sean Redmond <sean.redmond1@xxxxxxxxx>:
>> > Hi Vasily,
>> >
>> > You don't need to create a new pool and move the data to it; you can just
>> > update the crush map rule set to tell the existing RGW index pool to use
>> > a different 'root'.
>> > (http://docs.ceph.com/docs/master/rados/operations/crush-map/#crushmaprules)
>> >
>> > This change can be done online, but I would advise you to do it at a quiet
>> > time and set sensible levels of backfill and recovery, as it will result
>> > in the movement of data.
>> >
>> > Thanks
>> >
>> > On Sun, Jun 12, 2016 at 1:43 PM, Василий Ангапов <angapov@xxxxxxxxx> wrote:
>> >>
>> >> Hello!
>> >>
>> >> I did not find any information on how to move an existing RGW bucket
>> >> index pool to a new one.
>> >> I want to move my bucket indices onto SSD disks; do I have to shut down
>> >> the whole RGW or not? I would be very grateful for any tip.
>> >>
>> >> Regards, Vasily.
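For reference, the "idle" IO priority for scrub discussed above is usually applied through the OSD disk-thread options, which only take effect when the OSD data disks use the CFQ scheduler (as Sean set). This is only a sketch; the values shown are examples, not what Sean actually used:

    # ceph.conf, [osd] section: run the disk/scrub thread at idle priority under CFQ
    osd_disk_thread_ioprio_class = idle
    osd_disk_thread_ioprio_priority = 7

    # or inject at runtime on all OSDs without a restart
    ceph tell osd.* injectargs '--osd_disk_thread_ioprio_class idle --osd_disk_thread_ioprio_priority 7'

    # the scrub flags Vasily asks about turning off are cluster-wide,
    # i.e. they affect every pool, not just the bucket index pool
    ceph osd set noscrub
    ceph osd set nodeep-scrub
    # re-enable later with: ceph osd unset noscrub / ceph osd unset nodeep-scrub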
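The index shards Sean mentions were, on releases of this era, controlled by a gateway config option that only applies to buckets created after it is set; existing buckets keep their original index layout, which is presumably why the question at the top of the thread asks about converting one. A sketch, assuming a simple single-zone setup and a purely illustrative shard count:

    # ceph.conf, in the [client.radosgw.<instance>] (or [global]) section
    # affects newly created buckets only
    rgw override bucket index max shards = 16

In a multi-zone setup the equivalent per-zone setting (bucket_index_max_shards in the zone/region configuration) would be used instead.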
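Sean's suggestion of pointing the existing index pool at a different CRUSH root might look roughly like the following. The rule name rgw-index-ssd, the CRUSH root ssd, the pool name .rgw.buckets.index and the throttling values are assumptions to be adapted per cluster; note also that on pre-Luminous releases the pool property is crush_ruleset (later renamed crush_rule):

    # create a rule that places replicas under the 'ssd' root, one copy per host
    ceph osd crush rule create-simple rgw-index-ssd ssd host

    # look up the numeric id of the new rule
    ceph osd crush rule dump rgw-index-ssd

    # throttle backfill/recovery before triggering the data movement
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'

    # point the index pool at the new rule; as noted above, the data then
    # migrates online while RGW keeps running
    ceph osd pool set .rgw.buckets.index crush_ruleset <rule-id>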