Mysterious cache-tier flushing behavior

Hello devs and other sage(sic) people,

Ceph 0.94.5, cache tier in writeback mode.

As mentioned before, I'm running a cron job every day at 23:40 that drops
the flush dirty target by 4 percentage points (from 0.60 to 0.56) and then
resets it to the previous value 10 minutes later.
The idea is to have all the flushing done during off-peak hours, and that
works beautifully.
No flushes during daytime, only lightweight evicts.
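
For reference, the cron job boils down to two "ceph osd pool set" calls
against the cache pool (the pool name "cache-pool" below is just a
placeholder for the actual one, and 0.56/0.60 are my values):

  # /etc/cron.d/ceph-cache-flush -- sketch of the nightly ratio dance
  # 23:40: lower the dirty ratio so flushing happens during off-peak hours
  40 23 * * * root ceph osd pool set cache-pool cache_target_dirty_ratio 0.56
  # 23:50: restore the normal value
  50 23 * * * root ceph osd pool set cache-pool cache_target_dirty_ratio 0.60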

Now I'm graphing all kinds of Ceph- and system-related info with Graphite
and have noticed something odd.

When the flushes are initiated, the available HDD space of the OSDs in the
backing store drops by a few GB, pretty much the amount of dirty objects
accumulated above the threshold during the day, so no surprise there.
This happens every time that cron job runs.
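
For what it's worth, the raw numbers can be spot-checked from the CLI as
well; "ceph df detail" should show the per-pool usage (and a DIRTY object
count for the cache pool):

  # run shortly before and after 23:40 and compare USED / MAX AVAIL / DIRTY
  ceph df detail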

However, only on some days is this drop (which is more pronounced on those
days) accompanied by actual:
a) flushes according to the respective Ceph counters
b) network traffic from the cache-tier to the backing OSDs
c) HDD OSD writes (both from OSD perspective and actual HDD)
d) cache pool SSD reads (both from OSD perspective and actual SSD)

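Regarding (a), the counters I'm graphing come straight from the OSD admin
sockets; a quick manual check looks roughly like this (osd.0 standing in
for one of the cache-tier OSDs, and the tier_* names being the relevant
entries in the perf dump):

  # dump a cache-tier OSD's perf counters and pick out the tiering ones
  ceph daemon osd.0 perf dump | python -m json.tool | \
      grep -E '"tier_(flush|try_flush|evict|dirty)"'
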
So what is happening on the other days?

The space is clearly gone, and its disappearance is triggered by the
"flush", but no data was actually transferred to the HDD OSD nodes, nor was
anything (newly) written.

Dazed and confused,

Christian
-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/