Re: Mysterious cache-tier flushing behavior

Oh, so it's the available space that drops, not the consumed space. Not
sure then; caching has changed a bunch since I worked with it.
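
If the graphs leave any doubt about which number is moving, the stock
"ceph df detail" and "ceph osd df" output shows consumed vs. available
space per pool and per OSD; roughly like this (nothing here is specific
to your cluster):

  ceph df detail   # global RAW USED vs AVAIL, plus per-pool USED and object counts
  ceph osd df      # per-OSD SIZE, USE, AVAIL, %USE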

On Fri, Jun 17, 2016 at 9:49 AM, Christian Balzer <chibi@xxxxxxx> wrote:
>
>
> Hello Greg,
>
> The opposite, space is consumed:
>
> http://i.imgur.com/ALBR5dj.png
>
> I can assure you, in that cluster objects don't get deleted.
>
> Christian
>
> On Fri, 17 Jun 2016 08:57:31 -0700 Gregory Farnum wrote:
>
>> Sounds like you've got deleted objects in the cache tier getting flushed
>> (i.e., deleted) in the base tier.
>> -Greg
>>
>> On Thursday, June 16, 2016, Christian Balzer <chibi@xxxxxxx> wrote:
>>
>> >
>> > Hello devs and other sage(sic) people,
>> >
>> > Ceph 0.94.5, cache tier in writeback mode.
>> >
>> > As mentioned before, I'm running a cron job every day at 23:40 that
>> > drops the flush dirty target by 0.04 (from 0.60 to 0.56) and then
>> > resets it to the previous value 10 minutes later; a sketch of the two
>> > entries is below.
>> > The idea is to have all the flushing done during off-peak hours, and
>> > that works beautifully.
>> > No flushes during daytime, only lightweight evicts.
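>> >
>> > Roughly, the whole thing boils down to two cron entries like the ones
>> > below (the pool name and the /etc/cron.d layout are just illustrative,
>> > not my literal crontab):
>> >
>> >   # lower the dirty target at 23:40, restore it at 23:50
>> >   40 23 * * * root ceph osd pool set cache-pool cache_target_dirty_ratio 0.56
>> >   50 23 * * * root ceph osd pool set cache-pool cache_target_dirty_ratio 0.60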
>> >
>> > Now I'm graphing all kinds of Ceph- and system-related info with
>> > Graphite and have noticed something odd.
>> >
>> > When the flushes are initiated, the HDD space of the OSDs in the
>> > backing store drops by a few GB, pretty much the amount of dirty
>> > objects accumulated above the threshold during a day, so no surprise
>> > there. This happens every time that cron job runs.
>> >
>> > However, only on some days is this drop (more pronounced on those
>> > days) accompanied by actual:
>> > a) flushes according to the respective Ceph counters (see the sketch
>> >    below for how I read them)
>> > b) network traffic from the cache-tier to the backing OSDs
>> > c) HDD OSD writes (both from the OSD perspective and the actual HDDs)
>> > d) cache pool SSD reads (both from the OSD perspective and the actual SSDs)
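>> >
>> > (For a), I mean the cache-tiering perf counters from the OSD admin
>> > socket; something along these lines, with osd.0 standing in for any
>> > cache-tier OSD:
>> >
>> >   # dump the tier_* counters (tier_flush, tier_evict, tier_dirty, ...)
>> >   ceph daemon osd.0 perf dump | python -m json.tool | grep tier_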
>> >
>> > So what is happening on the other days?
>> >
>> > The space is clearly gone, and the drop is triggered by the "flush",
>> > but no data was actually transferred to the HDD OSD nodes, nor was
>> > anything (newly) written there.
>> >
>> > Dazed and confused,
>> >
>> > Christian
>> > --
>> > Christian Balzer        Network/Systems Engineer
>> > chibi@xxxxxxx        Global OnLine Japan/Rakuten
>> > Communications
>> > http://www.gol.com/
>> > _______________________________________________
>> > ceph-users mailing list
>> > ceph-users@xxxxxxxxxxxxxx
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >
>
>
> --
> Christian Balzer        Network/Systems Engineer
> chibi@xxxxxxx           Global OnLine Japan/Rakuten Communications
> http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


