How many PGs do you have, and how many are you increasing to?
Increasing PG counts can be disruptive if you are increasing by a large proportion of the initial count, because of all the PG peering involved. If you are doubling the number of PGs, it might be good to do it in stages to minimize peering. For example, if
you are going from 1024 to 2048, consider 4 increases of 256, allowing the cluster to stabilize in between, rather than one event that doubles the number of PGs.
If you expect this cluster to grow, overshoot the recommended PG count by 50% or so. This will let you minimize the number of PG increase events, and thus the impact to your users.
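As a rough sketch of the staged approach (the pool name "volumes" is just a placeholder here, and remember that on Firefly you also need to raise pgp_num after pg_num so the new PGs are actually used for placement), each step would look something like:

    # step 1 of 4: 1024 -> 1280
    ceph osd pool set volumes pg_num 1280
    ceph osd pool set volumes pgp_num 1280
    # watch "ceph -s" / "ceph health" until the creating/peering PGs clear
    # and slow requests drain, then repeat with 1536, 1792 and 2048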
From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of Matteo Dacrema <mdacrema@xxxxxxxx>
Date: Sunday, September 18, 2016 at 3:29 PM
To: Goncalo Borges <goncalo.borges@xxxxxxxxxxxxx>, "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
Subject: [EXTERNAL] Re: [ceph-users] Increase PG number
Hi, thanks for your reply.
Yes, I don't have any near-full OSDs.
The problem is not the rebalancing process but the creation of the new PGs.
I have only 2 hosts running the Ceph Firefly version, each with 3 SSDs for journaling.
During the creation of the new PGs, all the attached volumes stop reading and writing and show high iowait.
ceph -s tells me that there are thousands of slow requests.
When all the PGs have been created, the slow requests begin to decrease and the cluster starts the rebalancing process.
Matteo