On Thu, 25 Feb 2016 10:07:37 -0500 (EST) Jason Dillaman wrote:

> > > Let's start from the top. Where are you stuck with [1]? I have
> > > noticed that after evicting all the objects with RBD that one object
> > > for each active RBD is still left, I think this is the head object.
> > Precisely.
> > That came up in my extensive tests as well.
>
> Is this in reference to the RBD image header object (i.e. XYZ.rbd or
> rbd_header.XYZ)?

Yes.

> The cache tier doesn't currently support evicting
> objects that are being watched. This guard was added to the OSD because
> it wasn't previously possible to alert clients that a watched object had
> encountered an error (such as it no longer exists in the cache tier).
> Now that Hammer (and later) librbd releases will reconnect the watch on
> error (eviction), perhaps this guard can be loosened [1].
>
> [1] http://tracker.ceph.com/issues/14865
>

How do I interpret "all watchers" in the issue above? As in, all
watchers of an object, or all watchers in general?

If it is per object (which I guess/hope), then this fix would mean that
after an upgrade to Hammer or later on the client side a restart of the
VM would allow the header object to be evicted, while the header objects
for VMs that have been running since the dawn of time cannot. Correct?

This would definitely be better than having to stop the VM, flush things
and then start it up again.

Christian

> --
>
> Jason
>

-- 
Christian Balzer        Network/Systems Engineer
chibi@xxxxxxx           Global OnLine Japan/Rakuten Communications
http://www.gol.com/
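
For anyone wanting to see this behaviour for themselves, a rough sketch
of the relevant commands; the pool names ("rbd" as the base pool,
"cache-pool" as the cache tier) and the image id XYZ are placeholders,
not values taken from this thread:

  # see which objects are still left in the cache pool after evicting
  # (typically the rbd_header.* objects of running VMs)
  rados -p cache-pool ls

  # list the watchers of one of those header objects via the base pool;
  # a running VM (librbd client) shows up here, and that watch is what
  # currently blocks eviction
  rados -p rbd listwatchers rbd_header.XYZ

  # the "stop the VM, flush things" workaround: stop the client, then
  rados -p cache-pool cache-flush-evict-all

Note that listwatchers reports the watchers of a single object, which is
why the per-object versus global reading of "all watchers" in [1] matters
here.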