Re: rbd snapshot slow restore

On Tue, Dec 16, 2014 at 5:37 PM, Lindsay Mathieson <lindsay.mathieson@xxxxxxxxx> wrote:
On 17 December 2014 at 04:50, Robert LeBlanc <robert@xxxxxxxxxxxxx> wrote:
> There are really only two ways to do snapshots that I know of and they have
> trade-offs:
>
> COW into the snapshot (like VMware, Ceph, etc):
>
> When a write is committed, the changes are committed to a diff file and the
> base file is left untouched. This only has a single write penalty,

This is when you are accessing the snapshot image?

I suspect I'm looking at this differently - when I take a snapshot I never
access it "live"; I only ever restore it. Would that be merging it back
into the base?

I'm not sure what you mean by this. Once you take a snapshot, you technically work only on the snapshot. In VMware (sorry, most of my experience comes from VMware, but I believe KVM behaves the same way), when you take a snapshot the VM immediately uses the snapshot file for all reads and writes. You then have three options: 1. keep the snapshot indefinitely, 2. revert back to the snapshot point, or 3. delete the snapshot and merge the changes into the base to make them permanent.

In case "2", reverting is fast because it only deletes the diff file and points back to the original base disk, ready to make a new diff file.

In case "3", depending on how much write activity to "new" blocks has occurred, it may take a long time to copy those blocks into the base disk.
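For what it's worth, the three options map roughly onto qemu-img's internal qcow2 snapshots. A sketch, assuming a qcow2 disk named vm-disk.qcow2 (the file name is hypothetical; note that with qcow2 internal snapshots, deleting a snapshot keeps the current state rather than merging a separate diff file the way VMware does):

```shell
# Take a snapshot (the VM's current state is preserved under the name snap1)
qemu-img snapshot -c snap1 vm-disk.qcow2

# Option 2: revert the disk back to the snapshot point
qemu-img snapshot -a snap1 vm-disk.qcow2

# Option 3: drop the snapshot, keeping the current state as permanent
qemu-img snapshot -d snap1 vm-disk.qcow2

# List existing snapshots
qemu-img snapshot -l vm-disk.qcow2
```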

Rereading your previous post, I understand that you are using rbd snapshots and then using the rbd rollback command. You are testing this performance vs. the rollback feature in QEMU/KVM when on local/NFS disk. Is that accurate?

I haven't used the rollback feature. If you want to go back to a snapshot, would it be faster to create a clone off the snapshot, run your VM off that, and then just delete and recreate the clone whenever you want to revert?

rbd snap create rbd/test-image@snap1
rbd snap protect rbd/test-image@snap1
rbd clone rbd/test-image@snap1 rbd/test-image-snap1

You can then run:

rbd rm rbd/test-image-snap1
rbd clone rbd/test-image@snap1 rbd/test-image-snap1

to revert back to the original snapshot.


Whereabouts does qcow2 fall on this spectrum?

I think qcow2 falls into the same category as VMware, but I'm still cutting my teeth on QEMU/KVM. 
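From what I've seen, qcow2 is also COW into a diff: an overlay image references a read-only backing file, and all new writes land in the overlay. A sketch with hypothetical file names - reverting is as cheap as throwing the overlay away and recreating it (much like the rbd clone trick above), while committing merges the overlay back into the base:

```shell
# Create a base image, then an overlay that uses it as a read-only backing file
qemu-img create -f qcow2 base.qcow2 10G
qemu-img create -f qcow2 -b base.qcow2 -F qcow2 overlay.qcow2

# To revert: delete overlay.qcow2 and recreate it with the command above.
# To make the overlay's changes permanent instead, merge it into the base:
qemu-img commit overlay.qcow2
```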
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
