Re: RBD Watch Notify for snapshots

> -----Original Message-----
> From: Ilya Dryomov [mailto:idryomov@xxxxxxxxx]
> Sent: 22 August 2016 15:00
> To: Jason Dillaman <dillaman@xxxxxxxxxx>
> Cc: Nick Fisk <nick@xxxxxxxxxx>; ceph-users <ceph-users@xxxxxxxxxxxxxx>
> Subject: Re:  RBD Watch Notify for snapshots
> 
> On Fri, Jul 8, 2016 at 5:02 AM, Jason Dillaman <jdillama@xxxxxxxxxx> wrote:
> > librbd pseudo-automatically handles this by flushing the cache to the
> > snapshot when a new snapshot is created, but I don't think krbd does
> > the same. If it doesn't, it would probably be a nice addition to the
> > block driver to support the general case.
> >
> Barring that (or if you want to involve something like fsfreeze), I
> > think the answer depends on how much you are willing to write some
> > custom C/C++ code (I don't think the rados python library exposes
> > watch/notify APIs). A daemon could register a watch on a custom
> > per-host/image/etc object which would sync the disk when a
> > notification is received. Prior to creating a snapshot, you would need
> > to send a notification to this object to alert the daemon to sync/fsfreeze/etc.
> 
> If there is a filesystem on top of /dev/rbdX, which isn't suspended, how would the krbd driver flushing the page cache help? In order for
> the block device level snapshot to be consistent, the filesystem needs to be quiesced - fsfreeze or something resembling it is the only
> answer here.

I'm guessing that whatever your virtualisation/backup software is, it communicates with the qemu guest agent to call fsfreeze. That's assuming librbd is being used with qemu in this scenario. The question is whether the storage layer should be able to initiate this, or whether it's best left to the hypervisor/backup software.
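
For reference, fsfreeze(8) (and the guest agent's guest-fsfreeze-freeze, for that matter) is essentially a thin wrapper around the FIFREEZE/FITHAW ioctls, so whichever layer ends up initiating the quiesce just needs a file descriptor on the mounted filesystem and CAP_SYS_ADMIN. A rough sketch, with the mount point made up:

/*
 * Freeze/thaw around a snapshot via the same ioctls fsfreeze(8) uses.
 * Mount point is made up; needs CAP_SYS_ADMIN.
 * Build: gcc -o freeze freeze.c
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>                           /* FIFREEZE, FITHAW */

int main(void)
{
        int fd = open("/mnt/rbd0", O_RDONLY);   /* assumed mount point */
        if (fd < 0) {
                perror("open");
                return 1;
        }
        if (ioctl(fd, FIFREEZE, 0) < 0) {       /* flushes dirty data, blocks new writes */
                perror("FIFREEZE");
                return 1;
        }

        /* ... take the RBD snapshot while the fs is quiesced ... */

        if (ioctl(fd, FITHAW, 0) < 0)
                perror("FITHAW");
        close(fd);
        return 0;
}

The guest agent just adds the plumbing to reach inside the VM; the underlying mechanism is the same.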

But I agree, I think there is a difference between the block device being consistent in the sense that its caches have been flushed and its contents being consistent. There is also another layer on top of that: the applications. They potentially also need to be informed that a snapshot is about to be taken so they can flush any application buffers.
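
To make Jason's watch/notify suggestion concrete, a daemon along those lines could look roughly like the sketch below, using the librados C API. The pool and object names are made up, and most error handling is elided:

/*
 * Rough sketch of the daemon Jason describes: register a watch on a
 * per-host object and quiesce when notified. Pool ("rbd") and object
 * ("snap-trigger.host1") names are made up for illustration.
 * Build: gcc -o snap-watchd snap-watchd.c -lrados
 */
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <rados/librados.h>

static const char *OBJ = "snap-trigger.host1";

static void watch_cb(void *arg, uint64_t notify_id, uint64_t cookie,
                     uint64_t notifier_id, void *data, size_t data_len)
{
        rados_ioctx_t io = *(rados_ioctx_t *)arg;

        /* Quiesce here: sync(), fsfreeze the mount, poke applications... */
        sync();

        /* Ack so the notifier's rados_notify2() call can unblock. */
        rados_notify_ack(io, OBJ, notify_id, cookie, NULL, 0);
}

static void watch_err_cb(void *arg, uint64_t cookie, int err)
{
        fprintf(stderr, "watch error %d, should re-establish the watch\n", err);
}

int main(void)
{
        rados_t cluster;
        rados_ioctx_t io;
        uint64_t handle;

        rados_create(&cluster, NULL);            /* connect as client.admin */
        rados_conf_read_file(cluster, NULL);     /* default ceph.conf search path */
        if (rados_connect(cluster) < 0)
                return 1;
        rados_ioctx_create(cluster, "rbd", &io); /* assumed pool */

        rados_write_full(io, OBJ, "", 0);        /* object must exist to be watched */

        if (rados_watch2(io, OBJ, &handle, watch_cb, watch_err_cb, &io) < 0)
                return 1;

        pause();                                 /* callbacks arrive on librados threads */

        rados_unwatch2(io, handle);
        rados_ioctx_destroy(io);
        rados_shutdown(cluster);
        return 0;
}

The snapshot side would then call rados_notify2() on the same object before running the snapshot create; that call blocks until all watchers ack (or the timeout expires), which is what gives you the sync point.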

> 
> Thanks,
> 
>                 Ilya
