Re: help me understand ceph snapshot (also for gfarnum@xxxxxxxxxx)

Just to elaborate on Janne's earlier post, using your example of a 10 GB image. Assume you took a snapshot of the image and then wrote 4 KB after the snapshot. When you connect to the snapshot from a client machine, do you want to see a read-only 10 GB drive with correct sectors/filesystem formatting that you can mount, or a 4 KB data blob that cannot be mounted or viewed from the client's perspective?

From a dev perspective, you may be interested to know how many extra bytes this snapshot takes and where/how it is stored, but that is a different view.

/maged

On 29/02/2024 12:44, garcetto wrote:
my problem is WHY the first snap is the SAME size as the original volume... if I DON'T write/update anything on the original volume, then COW should not copy anything to the snap... so why does it start at the FULL original volume size?
(I have checked with du and info, and the snap shows as fully used)
thank you 

On Thu, Feb 29, 2024 at 9:39 AM Janne Johansson <icepic.dz@xxxxxxxxx> wrote:
Den tors 29 feb. 2024 kl 08:48 skrev garcetto <garcetto@xxxxxxxxx>:
>
> good morning,
>   i am trying to understand how ceph snapshot works.
> i have read that snaps are COW, which means, if I am correct, that if a new write updates an existing block on a volume, the "old" block is copied to the snap before it is overwritten on the original volume; am I right?
> so, I created a volume, say 10 GB in size, empty, then created a snap.
> so, coming to my doubt: why is the snap 10 GB in size? it should be 0, because no write updates were done; am I right?

This could depend on how you ask about the size. The snapshot
should present itself as being 10G in size if you, as a consumer of
it, ask how large it is. If you run something like "rbd info <name>"
and "rbd du <name>" you should be able to see the difference between
its apparent size and how much storage it consumes on the cluster.

rbd du -p glance-images 12345678-0e1e-4d51-abcd-484604d1df0a
NAME                                      PROVISIONED  USED
12345678-0e1e-4d51-abcd-484604d1df0a@snap       40GiB 40GiB
12345678-0e1e-4d51-abcd-484604d1df0a            40GiB    0B
<TOTAL>                                         40GiB 40GiB
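To make the accounting above concrete, here is a toy Python model of copy-on-write snapshot bookkeeping. This is purely illustrative (it is not how Ceph/RBD is implemented internally; all class and method names are made up): a snapshot of a never-written image consumes nothing, the first overwrite after a snapshot preserves only the old block, and overwriting a block that was empty at snapshot time still costs the snapshot nothing.

```python
# Toy model of copy-on-write (COW) snapshot accounting.
# Illustrative only -- not Ceph/RBD internals; names are invented.
class Image:
    def __init__(self, size, block=4096):
        self.size = size       # provisioned (apparent) size
        self.block = block     # block granularity
        self.blocks = {}       # offset -> data actually written to the head
        self.snaps = {}        # snap name -> {offset: pre-write data}

    def snap_create(self, name):
        # A new snap starts with no blocks of its own.
        self.snaps[name] = {}

    def write(self, offset, data):
        # Before overwriting, preserve the old block for every snap
        # that has not yet saved its own copy of this offset.
        for snap in self.snaps.values():
            if offset not in snap:
                snap[offset] = self.blocks.get(offset)
        self.blocks[offset] = data

    def used(self):
        return len(self.blocks) * self.block

    def snap_used(self, name):
        # Blocks that were empty at snapshot time cost nothing.
        return sum(1 for v in self.snaps[name].values()
                   if v is not None) * self.block


img = Image(size=10 * 2**30)     # 10 GiB thin-provisioned image
img.snap_create("s1")
print(img.snap_used("s1"))       # 0 -- nothing written, snap holds nothing

img.write(0, b"x")               # first write after the snapshot
print(img.used())                # 4096 -- one block on the head image
print(img.snap_used("s1"))       # 0 -- the overwritten block was empty

img.snap_create("s2")
img.write(0, b"y")               # overwrite a block that held real data
print(img.snap_used("s2"))       # 4096 -- s2 keeps the old 4 KiB block
```

In this model, the glance-image output above makes sense: the image was fully written before the snapshot was taken, so the snap "owns" every preserved block (40 GiB used) while the unmodified head image reports 0 B.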

--
May the most significant bit of your life be positive.

_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx

