Hi Simon,

If everything is in the same Ceph cluster and you want to move the whole “.rgw.buckets” pool (I assume your RBD traffic is targeted at a “data” or “rbd” pool) to your cold storage OSDs, maybe you could edit the CRUSH map; then it’s just a matter of rebalancing. You can check the ssd/platter example in the docs:
http://docs.ceph.com/docs/master/rados/operations/crush-map/ or this article detailing different maps:
http://cephnotes.ksperis.com/blog/2015/02/02/crushmap-example-of-a-hierarchical-cluster-map
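For reference, the mechanics look roughly like this (the “cold” root, host names and rule id are placeholders for your topology, and note that recent releases call the pool setting “crush_rule” rather than “crush_ruleset”):

  # dump and decompile the current CRUSH map
  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt

  # in crushmap.txt, add a root for the cold storage OSDs and a rule
  # targeting it, along these lines:
  #
  #   root cold {
  #       id -10                          # any unused negative id
  #       alg straw
  #       hash 0                          # rjenkins1
  #       item cold-host1 weight 1.000    # your cold-storage hosts
  #       item cold-host2 weight 1.000
  #   }
  #
  #   rule cold_storage {
  #       ruleset 2
  #       type replicated
  #       min_size 1
  #       max_size 10
  #       step take cold
  #       step chooseleaf firstn 0 type host
  #       step emit
  #   }

  # recompile and inject the edited map
  crushtool -c crushmap.txt -o crushmap.new
  ceph osd setcrushmap -i crushmap.new

  # point the pool at the new rule; Ceph then backfills the data onto
  # the cold storage OSDs in the background, no copy step needed
  ceph osd pool set .rgw.buckets crush_ruleset 2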
Cheers,
Maxime

From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of Simon Murray <simon.murray@xxxxxxxxxxxxxxxxx>

Morning guys,

I've got about 8 million objects sat in .rgw.buckets that want moving out of the way of OpenStack RBD traffic onto their own (admittedly small) cold storage pool on separate OSDs.

I attempted to do this over the weekend during a 12h scheduled downtime; however, my estimates had this pool completing in a rather un-customer-friendly (think no backups...) 7 days.

Anyone had any experience in doing this quicker? Any obvious reasons why I can't hack do_copy_pool() to spawn a bunch of threads and bang this off in a few hours?

Cheers,
Si
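(For context: do_copy_pool() in the rados tool walks the pool and copies one object at a time, which is why it is so slow. A threaded equivalent using the python-rados bindings might look roughly like the sketch below. The destination pool name and worker count are assumptions, and the naive read/write ignores omap and xattr metadata, snapshots and in-flight writes, so treat it as an illustration of the parallelism rather than a complete migration tool.)

  from concurrent.futures import ThreadPoolExecutor
  import rados

  SRC_POOL = '.rgw.buckets'
  DST_POOL = '.rgw.buckets.cold'   # assumed destination pool name
  WORKERS = 16                     # tune to your cluster

  cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
  cluster.connect()
  src = cluster.open_ioctx(SRC_POOL)
  dst = cluster.open_ioctx(DST_POOL)

  def copy_object(name):
      # naive whole-object copy: read the full object, write it out
      size, _ = src.stat(name)
      dst.write_full(name, src.read(name, size))

  # librados releases the GIL around I/O, so threads do overlap here;
  # for 8 million objects you would also want to bound the submission queue
  with ThreadPoolExecutor(max_workers=WORKERS) as pool:
      for obj in src.list_objects():
          pool.submit(copy_object, obj.key)

  src.close()
  dst.close()
  cluster.shutdown()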