Re: Can not disable rbd cache

> My guess would be that if you are already running hammer on the client it is
> already using the new watcher API. This would be a fix on the OSDs to allow
> the object to be moved because the current client is smart enough to try
> again. It would be watchers per object.
> Sent from a mobile device, please excuse any typos.
> On Feb 25, 2016 9:10 PM, "Christian Balzer" <chibi@xxxxxxx> wrote:

Correct.  The watch is per (RBD image header) object and the code change would need to be on the OSD side.
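If you want to see this for yourself, both the watch and the eviction guard are easy to observe with the rados CLI. A rough sketch only -- pool and image names below are placeholders, and it assumes a format 2 image whose base pool has a cache tier in front of it:

    # find the image id -- for a format 2 image the header object is
    # rbd_header.<id>, where <id> is taken from block_name_prefix
    rbd info rbd/myimage | grep block_name_prefix
    #   block_name_prefix: rbd_data.XYZ  ->  header object: rbd_header.XYZ

    # show which client currently holds a watch on that header object
    rados -p rbd listwatchers rbd_header.XYZ

    # trying to evict just that object should fail with EBUSY for as long
    # as the watch is established -- that is the guard discussed below
    rados -p cache-pool cache-evict rbd_header.XYZ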

> > On Thu, 25 Feb 2016 10:07:37 -0500 (EST) Jason Dillaman wrote:
> > > > > Let's start from the top. Where are you stuck with [1]? I have
> > > > > noticed that after evicting all the objects with RBD that one object
> > > > > for each active RBD is still left, I think this is the head object.
> > > >
> > > > Precisely.
> > > > That came up in my extensive tests as well.
> > >
> > > Is this in reference to the RBD image header object (i.e. XYZ.rbd or
> > > rbd_header.XYZ)?
> >
> > Yes.
> >
> > > The cache tier doesn't currently support evicting objects that are
> > > being watched. This guard was added to the OSD because it wasn't
> > > previously possible to alert clients that a watched object had
> > > encountered an error (such as it no longer exists in the cache tier).
> > > Now that Hammer (and later) librbd releases will reconnect the watch
> > > on error (eviction), perhaps this guard can be loosened [1].
> > >
> > > [1] http://tracker.ceph.com/issues/14865

> > How do I interpret "all watchers" in the issue above?
> 
> > As in, all watchers of an object, or all watchers in general.
> 

> > If it is per object (which I guess/hope), than this fix would mean that
> 
> > after an upgrade to Hammer or later on the client side a restart of the VM
> 
> > would allow the header object to be evicted, while the header objects for
> 
> > VMs that have been running since the dawn of time can not.
> 

> > Correct?
> 

> > This would definitely be better than having to stop the VM, flush things
> 
> > and then start it up again.
> 

> > Christian
> >
> > > --
> > > Jason
> >
> > --
> > Christian Balzer        Network/Systems Engineer
> > chibi@xxxxxxx           Global OnLine Japan/Rakuten Communications
> > http://www.gol.com/
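As for the stop/flush/restart sequence mentioned above: until the guard in the tracker issue ([1]) is loosened, something along these lines is what works today. Again only a sketch -- 'cache-pool' is a placeholder for whatever your cache tier pool is called:

    # 1. stop the VM (or unmap the image) so librbd drops its watch on
    #    rbd_header.XYZ

    # 2. flush and evict everything from the cache tier
    rados -p cache-pool cache-flush-evict-all

    # 3. confirm the header object is no longer in the cache pool
    rados -p cache-pool ls | grep rbd_header

    # 4. start the VM again; on first access the header object is promoted
    #    back into the cache tier and the watch is re-established

With a Hammer or later librbd on the client side, the change in [1] would presumably make step 1 unnecessary, since such a client can simply re-establish the watch after an eviction.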
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


