Re: osd: new pool flags: noscrub, nodeep-scrub


 



I like the idea of being able to control which pools we want to run scrub on.
Some data might deserve better protection than other data; it's really up to the admin.

The default behaviour will be to apply scrubbing to the default pool and to every newly created pool.
The operator can then change this behaviour either with a config flag or directly with a ceph mon command.
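For the mon command, I would expect something modelled on the existing 'ceph osd pool set' syntax. The exact names below are only a sketch of the proposed flags; nothing is implemented yet:

    ceph osd pool set <pool> noscrub 1        # disable periodic scrub for this pool only
    ceph osd pool set <pool> nodeep-scrub 1   # same for deep scrub
    ceph osd pool set <pool> noscrub 0        # re-enable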

> On 11 Sep 2015, at 14:59, Sage Weil <sweil@xxxxxxxxxx> wrote:
> 
> On Fri, 11 Sep 2015, Mykola Golub wrote:
>> On Fri, Sep 11, 2015 at 11:08:29AM +0100, Gregory Farnum wrote:
>>> On Fri, Sep 11, 2015 at 7:42 AM, Mykola Golub <mgolub@xxxxxxxxxxxx> wrote:
>>>> Hi,
>>>> 
>>>> I would like to add new pool flags: noscrub and nodeep-scrub, to be
>>>> able to control scrubbing on a per-pool basis. In our case it could be
>>>> helpful in order to disable scrubbing on cache pools, which does not
>>>> work well right now, but I can imagine other scenarios where it could
>>>> be useful too.
>>> 
>>> Can you talk more about this? It sounds to me like maybe you dislike
>>> the performance impact of scrubbing, but it's fairly important in
>>> terms of data integrity. I don't think we want to permanently disable
>>> them. A corruption in the cache pool isn't any less important than in
>>> the backing pool -- it will eventually get flushed, and it's where all
>>> the reads will be handled!
>> 
>> I was talking about this:
>> 
>> http://tracker.ceph.com/issues/8752
>> 
>> (a false negative on a caching pool). Although the best solution is
>> definitely to fix the bug, I am not sure it will be resolved soon (the
>> bug has been open for a year). Still, these false negatives are annoying,
>> as they complicate monitoring for genuinely inconsistent PGs. In this
>> case I might want to disable periodic scrub for caching pools as a
>> workaround (though I could still scrub them manually).
>> 
>> This might not be the best example of where these flags could be
>> helpful (I just came up with the idea when thinking about a workaround
>> for that problem, and it looked useful to me in general). We already
>> have 'ceph osd set no[deep-]scrub', and users use it to temporarily
>> reduce high I/O load. Being able to do this per pool looks useful too.
>> 
>> You might have pools of different importance to you, and disabling
>> scrub for some of them might be OK.
> 
> I wonder if, in addition, we should also allow scrub and deep-scrub
> intervals to be set on a per-pool basis?
> 
> sage
> 
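
For reference, the cluster-wide flags Mykola mentions are toggled like this today, and Sage's per-pool intervals could follow the same pool-property pattern (the per-pool option names below are just a sketch, not something that exists yet):

    # existing cluster-wide flags
    ceph osd set noscrub
    ceph osd set nodeep-scrub
    ceph osd unset noscrub
    ceph osd unset nodeep-scrub

    # hypothetical per-pool intervals, if we go that way
    ceph osd pool set <pool> scrub_min_interval 86400        # seconds
    ceph osd pool set <pool> deep_scrub_interval 604800      # seconds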


Cheers.
––––
Sébastien Han
Senior Cloud Architect

"Always give 100%. Unless you're giving blood."

Mail: seb@xxxxxxxxxx
Address: 11 bis, rue Roquépine - 75008 Paris


