Thanks Maxime
Hi Simon,
If everything is in the same Ceph cluster and you want to move the whole “.rgw.buckets” pool (I assume your RBD traffic is targeted at a “data” or “rbd” pool) onto your cold storage OSDs, you could edit the CRUSH map; after that it’s just a matter of letting the cluster rebalance.
You can check the ssd/platter example in the docs: http://docs.ceph.com/docs/master/rados/operations/crush-map/ or this article detailing different maps: http://cephnotes.ksperis.com/blog/2015/02/02/crushmap-example-of-a-hierarchical-cluster-map
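For what it's worth, here is a minimal sketch of the same idea driven from the Python rados bindings instead of hand-editing the CRUSH map. The rule name "cold-storage", the root "cold", the "host" failure domain and the rule id are all placeholders for whatever your cold-storage hierarchy actually looks like (on Jewel the pool variable is crush_ruleset; on later releases it is crush_rule):

import json
import rados

# Placeholder names: "cold" is whatever root/bucket in your CRUSH hierarchy
# holds the cold-storage OSDs, "cold-storage" is just a rule name.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    # Create a simple rule that places replicas under the "cold" root,
    # separated across hosts.
    ret, out, errs = cluster.mon_command(json.dumps({
        "prefix": "osd crush rule create-simple",
        "name": "cold-storage", "root": "cold", "type": "host"}), b'')
    print(ret, errs)

    # Point .rgw.buckets at the new rule (look up its id with
    # "ceph osd crush rule dump"); Ceph then rebalances the PGs onto the
    # cold-storage OSDs in the background.
    ret, out, errs = cluster.mon_command(json.dumps({
        "prefix": "osd pool set", "pool": ".rgw.buckets",
        "var": "crush_ruleset", "val": "1"}), b'')
    print(ret, errs)
finally:
    cluster.shutdown()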
Cheers,
Maxime
From: ceph-users <ceph-users-bounces@lists.ceph.com> on behalf of Simon Murray <simon.murray@xxxxxxxxxxxxxx.uk>
Date: Tuesday 16 August 2016 12:25
To: "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
Subject: rados cppool slooooooowness
Morning guys,
I've got about 8 million objects sitting in .rgw.buckets that want moving out of the way of OpenStack RBD traffic onto their own (admittedly small) cold storage pool on separate OSDs.
I attempted to do this over the weekend during a 12h scheduled downtime, but my estimates had the copy completing in a rather un-customer-friendly (think no backups...) 7 days.
Anyone had any experience in doing this quicker? Any obvious reasons why I can't hack do_copy_pool() to spawn a bunch of threads and bang this off in a few hours?
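For reference, a rough sketch of the sort of thing I have in mind, using the Python rados bindings rather than patching do_copy_pool(). Pool names and worker count are made up, it round-trips every object through the client, and it only carries data plus xattrs (no omap), so very much untested:

import rados
from concurrent.futures import ThreadPoolExecutor

SRC_POOL = '.rgw.buckets'
DST_POOL = '.rgw.buckets.cold'   # placeholder destination pool
WORKERS = 16

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

def copy_object(name):
    # An ioctx pair per call keeps the workers independent; a real run
    # would keep one pair per worker thread instead of reopening each time.
    src = cluster.open_ioctx(SRC_POOL)
    dst = cluster.open_ioctx(DST_POOL)
    try:
        size, _mtime = src.stat(name)
        data = src.read(name, length=size) if size else b''
        dst.write_full(name, data)
        # Carry the xattrs across as well (rgw keeps metadata there).
        for xname, xval in src.get_xattrs(name):
            dst.set_xattr(name, xname, xval)
    finally:
        src.close()
        dst.close()

# Enumerate the source pool once, then fan the copies out over a thread pool.
ioctx = cluster.open_ioctx(SRC_POOL)
names = [o.key for o in ioctx.list_objects()]
ioctx.close()

with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    list(pool.map(copy_object, names))

cluster.shutdown()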
Cheers
Si
DataCentred Limited registered in England and Wales no. 05611763
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com