Re: Turn snapshot of a flattened snapshot into regular image

I recreated the same scenario (the IDs have changed) and ran the info command at the different stages.

base image:

de4e1e90-7e81-4518-8558-f9eb1cfd3df8 | Test-SLE12SP1

ceph@node1:~/ceph-deploy> rbd -p images --image de4e1e90-7e81-4518-8558-f9eb1cfd3df8 info
rbd image 'de4e1e90-7e81-4518-8558-f9eb1cfd3df8':
        size 5120 MB in 640 objects
        order 23 (8192 kB objects)
        block_name_prefix: rbd_data.d11b11c7e3c01
        format: 2
        features: layering, striping, exclusive, object map
        flags:
        stripe unit: 8192 kB
        stripe count: 1

Launched instance:

ceph@node1:~/ceph-deploy> rbd -p images --image de4e1e90-7e81-4518-8558-f9eb1cfd3df8 snap ls
SNAPID NAME    SIZE
   447 snap 5120 MB
ceph@node1:~/ceph-deploy> rbd -p images --image de4e1e90-7e81-4518-8558-f9eb1cfd3df8 children --snap snap
images/268e932f-f6e1-4805-ace5-d64c81fa49b3_disk

images/268e932f-f6e1-4805-ace5-d64c81fa49b3_disk

is the disk of the instance based on Test-SLE12SP1. I installed one package in that instance and took a snapshot of it:

ec2a3ae0-236d-478d-9665-bef8fe909610 | snap-vm1

ceph@node1:~/ceph-deploy> rbd -p images --image ec2a3ae0-236d-478d-9665-bef8fe909610 info
rbd image 'ec2a3ae0-236d-478d-9665-bef8fe909610':
        size 20480 MB in 2560 objects
        order 23 (8192 kB objects)
        block_name_prefix: rbd_data.db5694dec62eb
        format: 2
        features: layering, striping, exclusive, object map
        flags:
        stripe unit: 8192 kB
        stripe count: 1
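
For comparison: if snap-vm1 were still a COW clone of the base image, I would expect rbd info to also print a parent line, roughly like this sketch (IDs taken from above); it doesn't, so this image apparently has no parent:

        parent: images/de4e1e90-7e81-4518-8558-f9eb1cfd3df8@snap
        overlap: 5120 MB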

ceph@node1:~/ceph-deploy> rbd -p images --image ec2a3ae0-236d-478d-9665-bef8fe909610 snap ls
SNAPID NAME     SIZE
   450 snap 20480 MB
ceph@node1:~/ceph-deploy> rbd -p images --image ec2a3ae0-236d-478d-9665-bef8fe909610 children --snap snap
ceph@node1:~/ceph-deploy>

Now I delete vm1; this is the rbd info output for the base image:

ceph@node1:~/ceph-deploy> rbd -p images --image de4e1e90-7e81-4518-8558-f9eb1cfd3df8 info
rbd image 'de4e1e90-7e81-4518-8558-f9eb1cfd3df8':
        size 5120 MB in 640 objects
        order 23 (8192 kB objects)
        block_name_prefix: rbd_data.d11b11c7e3c01
        format: 2
        features: layering, striping, exclusive, object map
        flags:
        stripe unit: 8192 kB
        stripe count: 1

I see no difference from the previous output. Then I delete the base image; this is the rbd info for snap-vm1:

ceph@node1:~/ceph-deploy> rbd -p images --image ec2a3ae0-236d-478d-9665-bef8fe909610 info
rbd image 'ec2a3ae0-236d-478d-9665-bef8fe909610':
        size 20480 MB in 2560 objects
        order 23 (8192 kB objects)
        block_name_prefix: rbd_data.db5694dec62eb
        format: 2
        features: layering, striping, exclusive, object map
        flags:
        stripe unit: 8192 kB
        stripe count: 1

This is also the same output as before. Does this help in understanding the issue? I don't know if the feature configuration has anything to do with this behaviour, but here is an extract of ceph.conf in case it helps.

ceph@node1:~/ceph-deploy> cat ceph.conf
[global]
fsid = 655cb05a-435a-41ba-83d9-8549f7c36167
osd_pool_default_size = 2
osd_crush_chooseleaf_type = 1
mon_pg_warn_max_per_osd = 0
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
rbd_default_features = 15
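
As far as I understand it, rbd_default_features is a bitmask, and 15 simply corresponds to the four features listed in the rbd info output above:

# standard rbd feature bits (sketch):
#   1 = layering, 2 = striping, 4 = exclusive-lock, 8 = object-map
# 15 = 1 + 2 + 4 + 8  ->  "features: layering, striping, exclusive, object map"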

Thanks again for looking into this!

Regards,
Eugen

Quoting Steve Taylor <steve.taylor@xxxxxxxxxxxxxxxx>:

You can use 'rbd -p images --image 417ef4b6-b4b2-4e94-9ae6-ef7a4ee3e560 info' to see the parentage of your cloned RBD from Ceph's perspective. It seems like that could be useful at various times throughout this test to determine what glance is doing under the covers.
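
If it helps, here is a quick-and-dirty sketch (untested as written) to dump the parentage of every image in the pool at once; adjust the pool name as needed:

for img in $(rbd -p images ls); do
    echo -n "$img: "
    # prints "parent: <pool>/<image>@<snap>" for COW clones, nothing otherwise
    rbd -p images --image "$img" info | grep parent || echo "(no parent)"
done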


________________________________

Steve Taylor | Senior Software Engineer | StorageCraft Technology Corporation <https://storagecraft.com>
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2799

________________________________

If you are not the intended recipient of this message or received it erroneously, please notify the sender and delete it, together with any attachments, and be advised that any dissemination or copying of this message is prohibited.

________________________________

-----Original Message-----
From: Eugen Block [mailto:eblock@xxxxxx]
Sent: Friday, September 2, 2016 7:12 AM
To: Steve Taylor <steve.taylor@xxxxxxxxxxxxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: Turn snapshot of a flattened snapshot into regular image

Something isn't right. Ceph won't delete RBDs that have existing
snapshots

That's what I thought, and I also noticed that in the first test, but not in the second.

The clone becomes a cinder device that is then attached to the nova instance.

This is one option, but I don't use it. Nova would only create a cinder volume if I executed "nova boot --block-device ...", which I don't, so there is no cinder involved here. I'll try to provide some details from OpenStack and ceph, maybe that helps to find the cause.
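
Just to illustrate the difference, from memory (so treat both lines as a sketch with placeholder IDs, not as copy-paste commands):

# what I actually do - ephemeral boot, the disk becomes an rbd clone, no cinder volume:
nova boot --flavor 2 --image <image-id> --nic net-id=<net-id> vm1

# what I do NOT do - boot from volume, which would create a cinder volume first:
nova boot --flavor 2 --block-device source=image,id=<image-id>,dest=volume,size=20,bootindex=0 --nic net-id=<net-id> vm1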

So I created a glance image
control1:~ #  glance image-list | grep Test
| 87862452-5872-40c9-b657-f5fec0d105c5 | Test2-SLE12SP1

which automatically gets one snapshot in rbd and has no children yet, because no VM has been launched yet:

ceph@node1:~/ceph-deploy> rbd -p images --image 87862452-5872-40c9-b657-f5fec0d105c5 snap ls
SNAPID NAME    SIZE
   429 snap 5120 MB

ceph@node1:~/ceph-deploy> rbd -p images --image 87862452-5872-40c9-b657-f5fec0d105c5 children --snap snap
ceph@node1:~/ceph-deploy>

Now I boot a VM

nova boot --flavor 2 --image 87862452-5872-40c9-b657-f5fec0d105c5 --nic net-id=4eafc4da-a3cd-4def-b863-5fb8e645e984 vm1

with the resulting instance_uuid=0e44badb-8a76-41d8-be43-b4125ffc6806, and I see this in ceph:

ceph@node1:~/ceph-deploy> rbd -p images --image 87862452-5872-40c9-b657-f5fec0d105c5 children --snap snap
images/0e44badb-8a76-41d8-be43-b4125ffc6806_disk

So I have the base image with a snapshot and, based on this snapshot, a child which is the disk image for my instance. There is no cinder volume:

control1:~ #  cinder list
+----+--------+------+------+-------------+----------+-------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+----+--------+------+------+-------------+----------+-------------+
+----+--------+------+------+-------------+----------+-------------+
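
To double-check the parent/child relationship described above at the rbd level, I suppose the instance disk itself should point back at the glance image's snapshot, something like (expected, not captured output):

rbd -p images --image 0e44badb-8a76-41d8-be43-b4125ffc6806_disk info | grep parent
#   parent: images/87862452-5872-40c9-b657-f5fec0d105c5@snap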

Now I create a snapshot of vm1 (I removed some lines to focus on the IDs):

control1:~ #  nova image-show 417ef4b6-b4b2-4e94-9ae6-ef7a4ee3e560
+-------------------------+--------------------------------------+
| Property                | Value                                |
+-------------------------+--------------------------------------+
| id                      | 417ef4b6-b4b2-4e94-9ae6-ef7a4ee3e560 |
| metadata base_image_ref | 87862452-5872-40c9-b657-f5fec0d105c5 |
| metadata image_type     | snapshot                             |
| metadata instance_uuid  | 0e44badb-8a76-41d8-be43-b4125ffc6806 |
| name                    | snap-vm1                             |
| server                  | 0e44badb-8a76-41d8-be43-b4125ffc6806 |
| status                  | ACTIVE                               |
| updated                 | 2016-09-02T12:51:28Z                 |
+-------------------------+--------------------------------------+

In rbd there is a new object now, without any children:

ceph@node1:~/ceph-deploy> rbd -p images --image 417ef4b6-b4b2-4e94-9ae6-ef7a4ee3e560 snap ls
SNAPID NAME     SIZE
   443 snap 20480 MB
ceph@node1:~/ceph-deploy> rbd -p images --image 417ef4b6-b4b2-4e94-9ae6-ef7a4ee3e560 children --snap snap
ceph@node1:~/ceph-deploy>

And there's still no cinder volume ;-)
After removing vm1 I can delete the base image and snap-vm1:

control1:~ #  nova delete vm1
Request to delete server vm1 has been accepted.
control1:~ #  glance image-delete 87862452-5872-40c9-b657-f5fec0d105c5
control1:~ #
control1:~ #  glance image-delete 417ef4b6-b4b2-4e94-9ae6-ef7a4ee3e560

I have not flattened any snapshot yet, so this is really strange! It seems as if the nova snapshot creates a full (flattened) image, so it doesn't depend on the base image. But I didn't change any configuration, so I really don't understand it. Please let me know if any additional information would help.
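
If the snapshot image really is flattened, it should carry its own data instead of referencing the base image. A sketch of how I could verify that (rbd du may need a reasonably recent client and is faster with fast-diff, as far as I know):

# provisioned vs. actually used space of the image and its snapshot;
# a flattened image should show its own, non-trivial usage here:
rbd du images/417ef4b6-b4b2-4e94-9ae6-ef7a4ee3e560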

Regards,
Eugen


Quoting Steve Taylor <steve.taylor@xxxxxxxxxxxxxxxx>:

Something isn't right. Ceph won't delete RBDs that have existing
snapshots, even when those snapshots aren't protected. You can't
delete a snapshot that's protected, and you can't unprotect a snapshot
if there is a COW clone that depends on it.

I'm not intimately familiar with OpenStack, but it must be deleting A
without any snapshots. That would seem to indicate that at the point
of deletion there are no COW clones of A or that any clone is no
longer dependent on A. A COW clone requires a protected snapshot, a
protected snapshot can't be deleted, and existing snapshots prevent
RBDs from being deleted.
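
If you want to see those rules in action outside of OpenStack, here's a minimal sketch with made-up names (testpool/img1):

rbd create testpool/img1 --size 1024
rbd snap create testpool/img1@s1
rbd snap protect testpool/img1@s1
rbd clone testpool/img1@s1 testpool/img1-clone

rbd rm testpool/img1                  # fails: the image still has snapshots
rbd snap rm testpool/img1@s1          # fails: the snapshot is protected
rbd snap unprotect testpool/img1@s1   # fails: a clone still depends on it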

In my experience with OpenStack, booting a nova instance from a glance
image causes a snapshot to be created, protected, and cloned on the
RBD for the glance image. The clone becomes a cinder device that is
then attached to the nova instance. Thus you're able to modify the
contents of the volume within the instance. You wouldn't be able to
delete the glance image at that point unless the cinder device were
deleted first or it was flattened and no longer dependent on the
glance image. I haven't performed this particular test. It's possible
that OpenStack does the flattening for you in this scenario.
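
And the corresponding cleanup path once the clone has been flattened (continuing the sketch above):

rbd flatten testpool/img1-clone       # copies the shared data into the clone
rbd snap unprotect testpool/img1@s1   # now succeeds, no dependent clones are left
rbd snap rm testpool/img1@s1
rbd rm testpool/img1                  # the original image can finally be deleted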

This issue will likely require some investigation at the RBD level
throughout your testing process to understand exactly what's
happening.



-----Original Message-----
From: Eugen Block [mailto:eblock@xxxxxx]
Sent: Thursday, September 1, 2016 9:06 AM
To: Steve Taylor <steve.taylor@xxxxxxxxxxxxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re:  Turn snapshot of a flattened snapshot into
regular image

Thanks for the quick response, but I don't believe I'm there yet ;-)

cloned the glance image to a cinder device

I have configured these three services (nova, glance, cinder) to use ceph as the storage backend, but cinder is not involved in the process I'm referring to.

Now I wanted to reproduce this scenario to show a colleague, and
couldn't because now I was able to delete image A even with a
non-flattened snapshot! How is that even possible?

Eugen



Quoting Steve Taylor <steve.taylor@xxxxxxxxxxxxxxxx>:

You're already there. When you booted ONE, you cloned the glance image to a cinder device (A', a separate RBD) that was a COW clone of A. That's why you can't delete A until you flatten SNAP1. A' isn't a full copy until that flatten is complete, at which point you're able to delete A.

SNAP2 is a second snapshot on A', and thus A' already has all of the
data it needs from the previous flatten of SNAP1 to allow you to
delete SNAP1. So SNAP2 isn't actually a full extra copy of the data.



-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On
Behalf Of Eugen Block
Sent: Thursday, September 1, 2016 6:51 AM
To: ceph-users@xxxxxxxxxxxxxx
Subject:  Turn snapshot of a flattened snapshot into
regular image

Hi all,

I'm trying to understand the idea behind rbd images and their
clones/snapshots. I have tried this scenario:

1. upload image A to glance
2. boot instance ONE from image A
3. make changes to instance ONE (install new package)
4. create snapshot SNAP1 from ONE
5. delete instance ONE
6. delete image A
   deleting image A fails because of existing snapshot SNAP1
7. flatten snapshot SNAP1
8. delete image A
   succeeds
9. launch instance TWO from SNAP1
10. make changes to TWO (install package)
11. create snapshot SNAP2 from TWO
12. delete TWO
13. delete SNAP1
    succeeds

This means that the second snapshot has the same (full) size as the first. Can I manipulate SNAP1 somehow so that snapshots are no longer flattened and SNAP2 becomes a COW clone of SNAP1?
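
What I have in mind is basically the manual COW clone workflow in rbd, something like this sketch with placeholder IDs (I have not tried whether glance/nova would accept an image created this way):

# protect the snapshot on the SNAP1 image and clone from it instead of flattening:
rbd snap protect images/<SNAP1-image-id>@snap
rbd clone images/<SNAP1-image-id>@snap images/<new-child-id>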

I hope my description is not too confusing. The idea behind this question is: if I have one base image and want to adjust it from time to time, I don't want to keep several versions of that image, just one. But this way I would lose the protection against deleting the base image.

Is there any config option in ceph or OpenStack, or anything else I can do, to "un-flatten" an image? I would assume that there is some kind of flag set for that image. Maybe someone can point me in the right direction.

Thanks,
Eugen

--
Eugen Block                             voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG      fax     : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg                         e-mail  : eblock@xxxxxx

        Vorsitzende des Aufsichtsrates: Angelika Mozdzen
          Sitz und Registergericht: Hamburg, HRB 90934
                  Vorstand: Jens-U. Mozdzen
                   USt-IdNr. DE 814 013 983

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com





