Hi Jean
You would probably need this:
ceph osd pool create glance-images-bkp 128 128
rados cppool glance-images glance-images-bkp
ceph osd pool rename glance-images glance-images-old
ceph osd pool rename glance-images-bkp glance-images
ceph osd pool delete glance-images-old glance-images-old --yes-i-really-really-mean-it   (once you are sure the data is moved 100%)
I would suggest stopping the OpenStack services that use the original pool, then copying the data and renaming the pools, and finally starting the OpenStack services again and checking that everything is there.
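Roughly as a script, the whole sequence would look like this (only a sketch; the Glance service name is an assumption that varies per distribution, and 128 PGs is just the example value from above):

#!/bin/sh
set -e
systemctl stop openstack-glance-api                 # stop services using the pool (assumed unit name)
ceph osd pool create glance-images-bkp 128 128      # new pool with a sane pg_num
rados cppool glance-images glance-images-bkp        # copy every object
ceph osd pool rename glance-images glance-images-old
ceph osd pool rename glance-images-bkp glance-images
systemctl start openstack-glance-api                # bring Glance back and verify the images
# only after verifying everything is there:
# ceph osd pool delete glance-images-old glance-images-old --yes-i-really-really-mean-it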
I have done this once with success.
****************************************************************
Karan Singh
Systems Specialist, Storage Platforms
CSC - IT Center for Science
Keilaranta 14, P. O. Box 405, FIN-02101 Espoo, Finland
mobile: +358 503 812758
tel. +358 9 4572001
fax +358 9 4572302
http://www.csc.fi/
****************************************************************
On Thu, Mar 26, 2015 at 2:53 PM, Steffen W Sørensen < stefws@xxxxxx> wrote:
On 26/03/2015, at 21.07, J-P Methot <jpmethot@xxxxxxxxxx> wrote:
That's a great idea. I know I can set up Cinder (the OpenStack volume manager) as a multi-backend manager and migrate from one backend to the other, each backend pointing to a different pool of the same Ceph cluster. What bugs me, though, is that I'm pretty sure the image store, Glance, wouldn't let me do that. Additionally, since the compute component also has its own Ceph pool, I'm pretty sure it won't let me migrate the data through OpenStack.
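To illustrate the multi-backend idea (the backend and pool names below are made up, not from this thread), cinder.conf can declare two RBD backends against the same cluster:

[DEFAULT]
enabled_backends = rbd-old,rbd-new

[rbd-old]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes-old
rbd_ceph_conf = /etc/ceph/ceph.conf
volume_backend_name = rbd-old

[rbd-new]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes-new
rbd_ceph_conf = /etc/ceph/ceph.conf
volume_backend_name = rbd-new

Volumes can then be moved with something along the lines of:

cinder type-create rbd-new
cinder type-key rbd-new set volume_backend_name=rbd-new
cinder retype --migration-policy on-demand <volume-id> rbd-new

But as noted, that only covers Cinder volumes, not the Glance image pool or the compute pool.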
Hm, wouldn’t it be possible to do something similar, à la:
# list objects from the src pool and loop over them
rados -p pool-with-too-many-pgs ls | while read obj; do
    # export $obj to local disk
    rados -p pool-with-too-many-pgs get "$obj" "$obj"
    # import $obj from local disk into the new pool
    rados -p better-sized-pool put "$obj" "$obj"
done
You would also have issues with snapshots if you do this on an RBD pool. That's unfortunately not feasible.
-Greg

It should be possible to split/partition the list of objects into multiple concurrent loops, possibly run from multiple boxes, as seems fit for the resources at hand: CPU, memory, network, Ceph performance.
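A minimal sketch of such a partitioned copy, assuming GNU xargs for the parallelism and the same example pool names as above:

# copy objects with 8 parallel workers
rados -p pool-with-too-many-pgs ls | \
    xargs -P 8 -I{} sh -c '
        rados -p pool-with-too-many-pgs get "$1" "$1" &&
        rados -p better-sized-pool put "$1" "$1"
    ' _ {}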
/Steffen
On 3/26/2015 3:54 PM, Steffen W Sørensen wrote:
On 26/03/2015, at 20.38, J-P Methot <jpmethot@xxxxxxxxxx> wrote:
Lately I've been going back to work on one of my first Ceph setups, and now I see that I have created way too many placement groups for the pools on that setup (about 10 000 too many). I believe this may impact performance negatively, as the performance on this Ceph cluster is abysmal. Since it is not possible to reduce the number of PGs in a pool, I was thinking of creating new pools with a smaller number of PGs, moving the data from the old pools to the new pools, and then deleting the old pools.
I haven't seen any command to copy objects from one pool to another. Would that be possible? I'm using Ceph for block storage with OpenStack, so surely there must be a way to move block devices from one pool to another, right?
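For reference, the current PG counts can be checked like this (the pool name is just an example):

ceph osd dump | grep '^pool'        # pg_num / pgp_num for every pool
ceph osd pool get volumes pg_num    # or for a single, hypothetical "volumes" pool
ceph osd pool get volumes pgp_num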
What I did at one point was go one layer higher in my storage abstraction: I created new Ceph pools, used those as new storage resources/pools in my VM environment (Proxmox) on top of Ceph RBD, and then did a live migration of the virtual disks there. I assume you could do the same in OpenStack.
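On the Proxmox side that disk move is a one-liner (the VM id, disk and storage names are made up for illustration):

# move virtual disk scsi0 of VM 100 onto the storage backed by the new pool,
# dropping the source copy afterwards
qm move_disk 100 scsi0 new-rbd-storage --delete 1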
My 0.02$
/Steffen
--
======================
Jean-Philippe Méthot
Administrateur système / System administrator
GloboTech Communications
Phone: 1-514-907-0050
Toll Free: 1-(888)-GTCOMM1
Fax: 1-(514)-907-0750
jpmethot@xxxxxxxxxx
http://www.gtcomm.net
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com