Re: hashpspool and backfilling

On Thu, Feb 20, 2014 at 12:48 PM, Dan van der Ster
<daniel.vanderster@xxxxxxx> wrote:
> Hi,
>
> On Thu, Feb 20, 2014 at 7:47 PM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
>>
>> On Tue, Feb 18, 2014 at 8:21 AM, Dan van der Ster
>> <daniel.vanderster@xxxxxxx> wrote:
>> > Hi,
>> > Today I've noticed an interesting result of not having hashpspool
>> > enabled on a number of pools -- backfilling is delayed.
>> >
>> > Take for example the following case: a PG from each of 5 different
>> > pools (details below) is mapped to the same three OSDs: 884,
>> > 1186, 122. This is of course bad for data distribution, but I realised
>> > today that it also delays backfilling. In our case we have osd max
>> > backfills = 1, so the first 4 PGs listed below all have to wait for
>> > 32.1a1 to finish before they can start. And in this case pool 32 has
>> > many objects of low importance, whereas pools 4 and 5 hold
>> > high-importance data that I'd like backfilled with priority.
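>> > (If I understand the legacy placement correctly, this collision is
>> > deterministic: without the hashpspool flag the placement seed is
>> > essentially pool_id + ps, so PGs whose pool id and seed sum to the
>> > same value land on the same OSDs. A quick bash check -- assuming
>> > pgp_num is large enough in all five pools that the modulo step is a
>> > no-op -- shows the five PGs below all share seed 0x1c1:
>> >
>> >    for pg in 2.1bf 6.1bb 4.1bd 5.1bc 32.1a1; do
>> >        pool=${pg%.*}; ps=${pg#*.}
>> >        # legacy (non-hashpspool) placement seed: pool id + pg seed
>> >        printf '%s -> 0x%x\n' "$pg" $(( pool + 0x$ps ))
>> >    done
>> >
>> > prints 0x1c1 for every one of them.)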
>>
>> I'm not sure if this reasoning is quite right. If you had hashpspool
>> enabled, you would still have sets of OSDs that share PGs across each
>> of the pools, and your max backfill params would still limit how many
>> of them could backfill at a time. They just wouldn't be sequentially
>> numbered.
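>>
>> If the serialization itself is hurting, you can at least raise the
>> limit at runtime with the usual injectargs mechanism, e.g.:
>>
>>    ceph tell osd.* injectargs '--osd-max-backfills 2'
>>
>> (or set "osd max backfills" in ceph.conf and restart the OSDs).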
>>
>
> OK, I guess we'll get some experience with that once all of our pools
> have hashpspool enabled.
>
>>
>> >
>> > Is there a way (implemented or planned) to prioritise the backfilling
>> > of certain pools over others?
>> > If not, is there a way to instruct a given PG to begin backfilling right
>> > away?
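>> > For now the best I can do seems to be watching the queue, e.g.:
>> >
>> >    ceph pg dump | egrep 'backfilling|wait_backfill'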
>>
>> Sadly no; I don't think we've ever talked about either of these. This
>> sounds like it would be a good feature request in the tracker, or
>> maybe a good blueprint for the upcoming summit if you can flesh out
>> the UI you'd like to see.
>
>
> OK great!
>
>>
>>
>> > And a related question: will
>> >    ceph osd pool set <poolname> hashpspool true
>> > be available in a dumpling release, e.g. 0.67.7? It is not available
>> > in 0.67.5, AFAICT.
>>
>> We discussed it but didn't think there was much point -- if you need to
>> enable hashpspool you can do so by extracting, manually editing, and
>> injecting the crush map. :)
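>>
>> The usual cycle is roughly:
>>
>>    ceph osd getcrushmap -o crush.map
>>    crushtool -d crush.map -o crush.txt
>>    # edit crush.txt
>>    crushtool -c crush.txt -o crush.new
>>    ceph osd setcrushmap -i crush.new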
>
>
> How does that work? Our decompiled crush maps don't mention pools:
>
>    ceph osd getcrushmap -o crush.map
>    crushtool -d crush.map -o crush.txt
>
> no pools or pool flags appear anywhere in crush.txt
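>
> The pool flags appear to live in the OSD map rather than the crush
> map; they show up (at least in newer releases) in:
>
>    ceph osd dump | grep '^pool'
>
> but I don't see any way to flip hashpspool from there in 0.67.x.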

*sigh*
You're right; I didn't think that all the way through when somebody
mentioned it to me offline. I believe we'll need to backport the
monitor command and will discuss it again with the team tomorrow or
Monday.
-Greg

>
> Cheers, Dan
>
>
>>
>> -Greg
>>
>> >
>> > Cheers, Dan
>> >
>> > 2.1bf   active+degraded+remapped+wait_backfill  [884,1186,122]      [884,1186,1216]
>> > 6.1bb   active+degraded+remapped+wait_backfill  [884,1186,122]      [884,1186,1216]
>> > 4.1bd   active+degraded+remapped+wait_backfill  [884,1186,122,841]  [884,1186,182,1216]
>> > 5.1bc   active+degraded+remapped+wait_backfill  [884,1186,122,841]  [884,1186,182,1216]
>> > 32.1a1  active+degraded+remapped+backfilling    [884,1186,122]      [884,1186,1216]
>> >
>> > full details at:
>> >
>> > http://pastebin.com/raw.php?i=LBpx5VsD
>> >
>> > -- Dan van der Ster || Data & Storage Services || CERN IT Department --
>
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



