> -----Original Message-----
> From: Jason Dillaman [mailto:jdillama@xxxxxxxxxx]
> Sent: 23 August 2016 13:23
> To: Nick Fisk <nick@xxxxxxxxxx>
> Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>
> Subject: Re: RBD Watch Notify for snapshots
>
> Looks good. Since you are re-using the RBD header object to send the watch
> notification, a running librbd client will most likely print out an error
> message along the lines of "failed to decode the notification" since you are
> sending "fsfreeze" / "fsunfreeze" as the payload, but it would be harmless.

Thanks Jason. I will take the comments from you and Ilya and make some
improvements. Are there any particular payloads I should look to standardise
on?

I was originally planning to trigger this when the RBD snapshot was taken, but
I didn't seem to see any notifies when watching the rbd_id.<rbd> object. Am I
not watching the correct object?

>
> On Mon, Aug 22, 2016 at 9:13 AM, Nick Fisk <nick@xxxxxxxxxx> wrote:
> > Hi Jason,
> >
> > Here is my initial attempt at using the Watch/Notify support to be able to
> > remotely fsfreeze a filesystem on an RBD. Please note this was all very new
> > to me, so there will probably be a lot of things that haven't been done in
> > the best way.
> >
> > https://github.com/fiskn/rbd_freeze
> >
> > I'm not sure if calling out to bash scripts is the best way of doing the
> > fsfreezing, but it was the easiest way I could think of to accomplish the
> > task. It also allowed me to fairly easily run extra checks, like seeing if
> > any files have been updated recently.
> >
> > Let me know what you think.
> >
> > Nick
> >
> >> -----Original Message-----
> >> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf
> >> Of Nick Fisk
> >> Sent: 08 July 2016 09:58
> >> To: dillaman@xxxxxxxxxx
> >> Cc: 'ceph-users' <ceph-users@xxxxxxxxxxxxxx>
> >> Subject: Re: RBD Watch Notify for snapshots
> >>
> >> Thanks Jason,
> >>
> >> I think I'm going to start with a bash script which SSHes into the machine
> >> to check if the process has finished writing and then calls fsfreeze, as
> >> I've got time constraints on getting this working. But I will definitely
> >> revisit this and see if there is something I can create which will do as
> >> you have described, as it would be a much neater solution.
> >>
> >> Nick
> >>
> >> > -----Original Message-----
> >> > From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On
> >> > Behalf Of Jason Dillaman
> >> > Sent: 08 July 2016 04:02
> >> > To: nick@xxxxxxxxxx
> >> > Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>
> >> > Subject: Re: RBD Watch Notify for snapshots
> >> >
> >> > librbd pseudo-automatically handles this by flushing the cache to the
> >> > snapshot when a new snapshot is created, but I don't think krbd does the
> >> > same. If it doesn't, it would probably be a nice addition to the block
> >> > driver to support the general case.
> >> >
> >> > Barring that (or if you want to involve something like fsfreeze), I
> >> > think the answer depends on how much you are willing to write some
> >> > custom C/C++ code (I don't think the rados python library exposes
> >> > watch/notify APIs). A daemon could register a watch on a custom
> >> > per-host/image/etc object which would sync the disk when a notification
> >> > is received. Prior to creating a snapshot, you would need to send a
> >> > notification to this object to alert the daemon to sync/fsfreeze/etc.
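
For reference, here is a rough, untested sketch of the watcher daemon
described above, using the librados C watch/notify API. The object name
("rbd_freeze.myhost"), pool ("rbd"), mount point and the call out to
fsfreeze(8) are made-up placeholders for illustration; this is not what the
rbd_freeze repo actually does, and error checking is mostly left out.

/*
 * Sketch of a per-host watcher daemon: register a watch on a custom
 * notification object and freeze/unfreeze a filesystem on demand.
 * Build with something like: gcc watcher.c -o rbd_freeze_watcher -lrados
 */
#include <rados/librados.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define NOTIFY_OBJ "rbd_freeze.myhost"   /* assumed per-host object name */
#define MOUNTPOINT "/mnt/rbd0"           /* assumed filesystem to freeze */

static void watch_cb(void *arg, uint64_t notify_id, uint64_t handle,
                     uint64_t notifier_id, void *data, size_t data_len)
{
    rados_ioctx_t io = *(rados_ioctx_t *)arg;
    char payload[32] = {0};
    int ret = -1;

    if (data && data_len > 0 && data_len < sizeof(payload))
        memcpy(payload, data, data_len);

    /* Act on the payload; shelling out to fsfreeze(8) keeps the sketch short. */
    if (strcmp(payload, "fsfreeze") == 0)
        ret = system("fsfreeze -f " MOUNTPOINT);
    else if (strcmp(payload, "fsunfreeze") == 0)
        ret = system("fsfreeze -u " MOUNTPOINT);

    /* Ack so the notifier's rados_notify2() call can return. */
    const char *reply = (ret == 0) ? "ok" : "err";
    rados_notify_ack(io, NOTIFY_OBJ, notify_id, handle, reply, strlen(reply));
}

static void watch_err_cb(void *arg, uint64_t cookie, int err)
{
    /* A real daemon should unwatch and re-register here. */
    fprintf(stderr, "watch error: %d\n", err);
}

int main(void)
{
    rados_t cluster;
    rados_ioctx_t io;
    uint64_t handle;

    rados_create(&cluster, NULL);
    rados_conf_read_file(cluster, NULL);   /* default /etc/ceph/ceph.conf */
    rados_connect(cluster);
    rados_ioctx_create(cluster, "rbd", &io);

    /* Ensure the notification object exists; watching a missing object fails. */
    rados_write_full(io, NOTIFY_OBJ, "x", 1);
    rados_watch2(io, NOTIFY_OBJ, &handle, watch_cb, watch_err_cb, &io);

    pause();   /* wait for notifications until killed */

    rados_unwatch2(io, handle);
    rados_ioctx_destroy(io);
    rados_shutdown(cluster);
    return 0;
}
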
> >> >
> >> > On Thu, Jul 7, 2016 at 12:33 PM, Nick Fisk <nick@xxxxxxxxxx> wrote:
> >> > Hi All,
> >> >
> >> > I have an RBD mounted to a machine via the kernel client and I wish to
> >> > be able to take a snapshot and mount it to another machine where it can
> >> > be backed up.
> >> >
> >> > The big issue is that I need to make sure that the process writing on
> >> > the source machine is finished and the FS is sync'd before taking the
> >> > snapshot.
> >> >
> >> > My question: is there something I can do with Watch/Notify to trigger
> >> > this checking/sync process on the source machine before the snapshot is
> >> > actually taken?
> >> >
> >> > Thanks,
> >> > Nick
> >> >
> >> > --
> >> > Jason
> >>
> >
>
> --
> Jason

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
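
For completeness, a matching sketch of the notifier side, roughly what the
host taking the snapshot could run before "rbd snap create", under the same
assumptions as the watcher sketch above (made-up object name, pool "rbd",
10-second timeout):

/*
 * Send a "fsfreeze" notification and wait for the watcher(s) to ack.
 * Build with something like: gcc notifier.c -o rbd_freeze_notify -lrados
 */
#include <rados/librados.h>
#include <stdio.h>
#include <string.h>

#define NOTIFY_OBJ "rbd_freeze.myhost"   /* must match the watcher, assumed name */

int main(void)
{
    rados_t cluster;
    rados_ioctx_t io;
    char *reply = NULL;
    size_t reply_len = 0;
    int r;

    rados_create(&cluster, NULL);
    rados_conf_read_file(cluster, NULL);
    rados_connect(cluster);
    rados_ioctx_create(cluster, "rbd", &io);

    /* Blocks until all watchers ack or the 10s timeout expires. */
    r = rados_notify2(io, NOTIFY_OBJ, "fsfreeze", (int)strlen("fsfreeze"),
                      10000, &reply, &reply_len);
    if (r < 0)
        fprintf(stderr, "notify failed: %d\n", r);
    else
        printf("watchers acked, safe to take the snapshot now\n");

    rados_buffer_free(reply);
    rados_ioctx_destroy(io);
    rados_shutdown(cluster);
    return r < 0 ? 1 : 0;
}

If the notify succeeds, the snapshot can be taken and a "fsunfreeze"
notification sent the same way afterwards to thaw the filesystem.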