Re: Prioritise recovery on specific PGs/OSDs?

Yes. I don't know exactly in which release it was introduced, but in the latest Jewel and beyond there is:

<SNIP>
Please use the pool-level options recovery_priority and recovery_op_priority to enable the pool-level recovery priority feature:
# ceph osd pool set default.rgw.buckets.index recovery_priority 5
# ceph osd pool set default.rgw.buckets.index recovery_op_priority 5
A recovery value of 5 will help because the default is 3 in the Jewel release; use the command below to check whether both options are set properly.
</SNIP>
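
The check command itself didn't make it into that snippet; assuming recovery_priority and recovery_op_priority are readable through "ceph osd pool get" on your release, something like this should confirm both options took effect:

# ceph osd pool get default.rgw.buckets.index recovery_priority
# ceph osd pool get default.rgw.buckets.index recovery_op_priority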

r,
Sam


On 20-06-17 15:48, Logan Kuhn wrote:
Is there a way to prioritize specific pools during recovery?  I know there are issues open for it, but I wasn't aware it was implemented yet...

Regards,
Logan

----- On Jun 20, 2017, at 8:20 AM, Sam Wouters <sam@xxxxxxxxx> wrote:
Hi,

Are they all in the same pool? If not, you could prioritise recovery per pool.
Otherwise, maybe you can play with the osd_max_backfills setting; no idea whether it accepts a value of 0 to actually disable backfill for specific OSDs.
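
For what it's worth, you can at least skew the per-OSD backfill limits, raising them on the OSDs whose PGs matter and throttling the rest. A rough sketch (osd.12 and osd.13 are placeholder ids, and I haven't verified whether 0 is accepted):

# ceph tell osd.* injectargs '--osd-max-backfills 1'
# ceph tell osd.12 injectargs '--osd-max-backfills 4'
# ceph tell osd.13 injectargs '--osd-max-backfills 4'

The first line throttles every OSD; the next two raise the limit again on the OSDs you want drained first.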

r,
Sam

On 20-06-17 14:44, Richard Hesketh wrote:
Is there a way, either by individual PG or by OSD, that I can prioritise backfill/recovery on a set of PGs which are currently particularly important to me?

For context, I am replacing disks in a 5-node Jewel cluster on a node-by-node basis - mark out the OSDs on a node, wait for them to clear, replace the disks, bring the new OSDs up and in, mark out the OSDs on the next node, and so on. I've done my first node, but the significant CRUSH map changes mean most of my data is moving. Currently I only care about the PGs on the next set of OSDs to be replaced - I don't care about the other remapped PGs settling, because they're only going to move around again after I do the next set of disks. I do want the PGs on the OSDs I'm about to replace to backfill, because I don't want to compromise data integrity by downing those OSDs while they still host active PGs. If I could specifically prioritise the backfill on those PGs/OSDs, I could get on with replacing disks without worrying about causing degraded PGs.
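
Concretely, per node it looks roughly like this (osd ids are illustrative, and I'm assuming "ceph pg ls-by-osd" is available on Jewel):

# ceph osd out 10
# ceph osd out 11
# ceph pg ls-by-osd 10        <- repeat until no PGs are left mapped to the outed OSDs
...physically replace the disks and recreate the OSDs...
# ceph osd in 10
# ceph osd in 11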

I'm in a situation right now where there are merely a couple of dozen PGs on the disks I want to replace, all remapped and waiting to backfill - but there are 2200 other PGs also waiting to backfill because they've moved around too, and it's extremely frustrating to be sat waiting to see when the ones I care about will finally be handled so I can get on with replacing those disks.
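
For the record, I'm counting them along these lines (treat it as a sketch; the exact PG state names and dump output differ a bit between releases):

# ceph pg ls-by-osd 10 | grep -c backfill
# ceph pg dump pgs_brief 2>/dev/null | grep -c backfill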

Rich



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
