Just had a go at reproducing this, and yeah, the behaviour is weird.
Our automated testing for cephfs doesn't include any cache tiering, so
this is a useful exercise!
With a writeback overlay cache tier pool on an EC pool, I write a bunch
of files, then do a rados cache-flush-evict-all, then delete the files
in cephfs. The result is that all the objects still show up in a
"rados ls" of both the base and cache pools, but if I try to rm any of
them I get ENOENT.
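For reference, the reproduction boils down to roughly the following (pool names, mount point, and object name are placeholders, not the actual ones I used):

```shell
# Placeholder names: cachepool = cache tier pool, ecpool = EC base pool.
# 1. Write a bunch of files through CephFS, then flush/evict the cache tier:
rados -p cachepool cache-flush-evict-all

# 2. Delete the files via the CephFS client:
rm /mnt/cephfs/testfile*

# 3. Objects are still listed in both pools:
rados -p ecpool ls
rados -p cachepool ls

# 4. ...but removing one directly fails with ENOENT:
rados -p ecpool rm <some-object-name>
```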
Then, when I do another cache-flush-evict-all, the objects finally
disappear from the df stats (base and cache pool stats ticking down
together).
So intuitively, I guess the cache tier is caching the delete-ness of the
objects, and only later flushing that (i.e. deleting from the base
pool). The objects are still "in the cache" on that basis, and
presumably the delete isn't flushed (i.e. applied to the base pool)
until the usual timeout/space thresholds kick in. Maybe we need something to kick delete
flushes to happen much earlier (like, ASAP when the cluster isn't too
busy doing other promotions/evictions).
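In the meantime, the knobs that influence how soon dirty objects (including cached deletes) get flushed are the tiering pool settings. A sketch, with an illustrative pool name and values, not a recommendation:

```shell
# cachepool is a placeholder; values are illustrative only.
# Allow flushing dirty objects once they are 60s old:
ceph osd pool set cachepool cache_min_flush_age 60
# Start flushing when 10% of the cache is dirty:
ceph osd pool set cachepool cache_target_dirty_ratio 0.1
```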
Sam probably has some more informed thoughts than mine on the expected
behaviour here.
John
On 12/06/2015 16:54, Lincoln Bryant wrote:
Greetings experts,
I've got a test set up with CephFS configured to use an erasure coded
pool + cache tier on 0.94.2.
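(For anyone wanting to reproduce this: a writeback cache tier over an EC pool is set up roughly as below. Pool names and PG counts are placeholders, not the ones from my cluster.)

```shell
# Placeholder pool names and PG counts.
ceph osd pool create ecpool 64 64 erasure
ceph osd pool create cachepool 64 64
# Attach cachepool as a writeback overlay tier on ecpool:
ceph osd tier add ecpool cachepool
ceph osd tier cache-mode cachepool writeback
ceph osd tier set-overlay ecpool cachepool
```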
I have been writing lots of data to fill the cache to observe the
behavior and performance when it starts evicting objects to the
erasure-coded pool.
The thing I have noticed is that after deleting the files via 'rm'
through my CephFS kernel client, the cache is emptied but the objects
that were evicted to the EC pool stick around.
I've attached an image that demonstrates what I'm seeing.
Is this intended behavior, or have I misconfigured something?
Thanks,
Lincoln Bryant
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com