Re: decrease pg number

You can't migrate RBD objects via cppool right now as it doesn't handle snapshots at all.
I think a few people have done it successfully by setting up existing pools as cache tiers on top of the target pool and then flushing them out, but I've not run through that.
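Very roughly, that cache-tier route looks something like this (untested by me; "rbd" and "rbd.new" below are just placeholder pool names, so double-check against the tiering docs before trying it on real data):

---------------------------------------------
# Hypothetical names: "rbd" is the old pool, "rbd.new" the smaller target pool.
ceph osd pool create rbd.new 256

# Make the old (non-empty) pool a cache tier on top of the new base pool.
ceph osd tier add rbd.new rbd --force-nonempty
ceph osd tier cache-mode rbd forward

# Flush/evict everything from the old pool down into rbd.new, then detach it.
rados -p rbd cache-flush-evict-all
ceph osd tier remove rbd.new rbd

# Then delete the old pool and rename rbd.new, as in the commands further down.
---------------------------------------------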

You can also just raise the PG warning threshold if your system is otherwise working fine. I believe this is discussed in the release notes.
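If you go that route, it's just a config change on the monitors; a minimal sketch, assuming the Hammer-era option name mon_pg_warn_max_per_osd (0 disables the warning entirely):

---------------------------------------------
# In ceph.conf on the monitor nodes, then restart the mons:
[mon]
    mon pg warn max per osd = 0
---------------------------------------------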
-Greg
On Wed, Apr 22, 2015 at 3:36 PM Francois Lafont <flafdivers@xxxxxxx> wrote:
Hi,

Pavel V. Kaygorodov wrote:

> I have updated my cluster to Hammer and got a warning "too many PGs
> per OSD (2240 > max 300)". I know that there is no way to decrease
> the number of placement groups, so I want to re-create my pools with
> a lower PG count, move all my data to them, delete the old pools and
> rename the new pools to the old names. I also want to preserve the
> user rights on the new pools. I have several pools with RBD images,
> some of them with snapshots.
>
> Which is the proper way to do this?

I'm not a Ceph expert; I can just tell you about my (small but happy) experience. ;)
I had the same problem with my radosgw pools, i.e.:

- the .rgw.* pools except ".rgw.buckets", and
- the .users.* pools

So, **warning**, this was for very tiny pools. The Ceph version was
Hammer 0.94.1, and the nodes were Ubuntu 14.04 with a 3.16 kernel.
These commands worked well for me:

---------------------------------------------
# /!\ Beforehand, I stopped my radosgws (i.e. the Ceph clients of these pools).

old_pool=foo
new_pool=foo.new

ceph osd pool create $new_pool 64
rados cppool $old_pool $new_pool
ceph osd pool delete $old_pool $old_pool --yes-i-really-really-mean-it
ceph osd pool rename $new_pool $old_pool

# Afterwards, I restarted my radosgws.
---------------------------------------------
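As for preserving the user rights mentioned in the original question: as far as I know, caps match pools by name, so after the rename the existing caps should keep working. You can check them and, if needed, re-set them; client.radosgw.gateway and the caps below are only an example:

---------------------------------------------
# Show the current caps of a client (example client name).
ceph auth get client.radosgw.gateway

# If a cap referenced the temporary pool name, re-point it at the old name.
# Note: "ceph auth caps" replaces *all* caps of the client, so list them all.
ceph auth caps client.radosgw.gateway mon 'allow r' osd 'allow rwx pool=foo'
---------------------------------------------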

That's all. In my case it was very fast because the pools didn't contain
much data.

And let me extend your question: is it possible to do the same with a
CephFS pool, for instance the metadata pool?

If I try the commands above, I get an error on the delete command:

~# ceph osd pool delete metadata metadata --yes-i-really-really-mean-it
Error EBUSY: pool 'metadata' is in use by CephFS

However, I'm sure no client is using the CephFS (it's a test cluster).

--
François Lafont
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
