Hi,

Today I noticed an interesting consequence of not having hashpspool enabled on a number of pools: backfilling is delayed. Take the following case: a PG from each of 5 different pools (details below) is mapped to the same three OSDs: 884, 1186, 122. This is of course bad for data distribution, but I realised today that it also delays backfilling. In our case we have osd max backfills = 1, so the first 4 PGs listed below all have to wait for 32.1a1 to finish before they can start. And in this case, pool 32 has many objects of low importance, whereas pools 4 and 5 hold high-importance data that I'd like backfilled with priority.

Is there a way (implemented or planned) to prioritise the backfilling of certain pools over others? If not, is there a way to instruct a given PG to begin backfilling right away?

And a related question: will "ceph osd pool set <poolname> hashpspool true" be available in a dumpling release, e.g. 0.67.7? It is not available in 0.67.5, AFAICT.

Cheers, Dan

2.1bf   active+degraded+remapped+wait_backfill   [884,1186,122]       [884,1186,1216]
6.1bb   active+degraded+remapped+wait_backfill   [884,1186,122]       [884,1186,1216]
4.1bd   active+degraded+remapped+wait_backfill   [884,1186,122,841]   [884,1186,182,1216]
5.1bc   active+degraded+remapped+wait_backfill   [884,1186,122,841]   [884,1186,182,1216]
32.1a1  active+degraded+remapped+backfilling     [884,1186,122]       [884,1186,1216]

Full details at: http://pastebin.com/raw.php?i=LBpx5VsD

--
Dan van der Ster || Data & Storage Services || CERN IT Department

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
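[Editor's note: one partial workaround, in the absence of per-pool backfill priorities, is that osd max backfills can be changed at runtime on just the affected OSDs via injectargs, so the waiting PGs no longer queue behind a single backfill slot. This is a sketch, not a confirmed fix: it raises concurrency on those OSDs rather than prioritising specific pools, the OSD ids are taken from the listing above, and the value 2 is an arbitrary example.]

```shell
# Temporarily allow 2 concurrent backfills on the OSDs shared by these PGs.
# (injectargs changes are not persistent; they revert when the OSD restarts.)
ceph tell osd.884  injectargs '--osd-max-backfills 2'
ceph tell osd.1186 injectargs '--osd-max-backfills 2'
ceph tell osd.1216 injectargs '--osd-max-backfills 2'

# Revert to the configured value once the important PGs have backfilled.
ceph tell osd.884  injectargs '--osd-max-backfills 1'
ceph tell osd.1186 injectargs '--osd-max-backfills 1'
ceph tell osd.1216 injectargs '--osd-max-backfills 1'
```

Note this trades recovery speed for client I/O impact on those OSDs, so it is best applied narrowly and reverted promptly.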