Re: help me understand ceph snapshot (also for gfarnum@xxxxxxxxxx)

Hello,

When you take a snapshot, it's not that the volume retains the data
and anything that gets modified is copied to the snapshot. Rather,
internally, all of the current data becomes associated with that
snapshot, and reads to the volume for data that has not yet been
modified are redirected to that snapshot layer. Once you write to the
volume, the relevant object is copy-on-written to the volume layer. If
you then take another snapshot, anything that has been copy-on-written
to that point becomes associated with that new snapshot layer.
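To make the layering concrete, here is a toy model in Python. It illustrates the redirect-on-read and copy-on-write behavior described above; the class, the dict-per-layer structure, and the object names are assumptions made for the sketch, not actual librbd code or the on-disk format:

```python
# Toy model of RBD-style snapshot layering (illustration only, not librbd).
# Each snapshot is a read-only layer; reads fall through to the newest
# layer holding the object; writes copy-on-write into the live volume.

class Volume:
    def __init__(self):
        self.live = {}        # objects written since the last snapshot
        self.snapshots = []   # list of (name, layer dict), oldest first

    def snap(self, name):
        # All current live data becomes associated with this snapshot;
        # the live layer starts over empty.
        self.snapshots.append((name, self.live))
        self.live = {}

    def write(self, obj, data):
        # Copy-on-write: new data lands in the live volume layer,
        # leaving the snapshot layers untouched.
        self.live[obj] = data

    def read(self, obj):
        # Check the live layer first, then redirect down through the
        # snapshot layers, newest first.
        if obj in self.live:
            return self.live[obj]
        for _, layer in reversed(self.snapshots):
            if obj in layer:
                return layer[obj]
        return None

vol = Volume()
vol.write("block0", "v1")
vol.snap("snap1")            # "v1" now belongs to snap1's layer
print(vol.read("block0"))    # v1 (read redirected to the snap1 layer)
vol.write("block0", "v2")    # COW: "v2" goes to the volume layer
print(vol.read("block0"))    # v2; snap1 still holds "v1"
```

Note how taking the snapshot moves the existing data into the snapshot layer rather than copying it, which is why the snapshot can appear to "own" the data even when nothing has been written since.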

HTH,
Josh

On Thu, Feb 29, 2024 at 3:45 AM garcetto <garcetto@xxxxxxxxx> wrote:
>
> my problem is WHY the first snap is the SAME size as the original volume... if I DON'T write/update anything on the original volume, then COW should NOT copy anything to the snap... so why does it start at the FULL original volume size?
> (I have checked with du and info, and the snap shows as fully used)
> thank you
>
> On Thu, Feb 29, 2024 at 9:39 AM Janne Johansson <icepic.dz@xxxxxxxxx> wrote:
>>
>> Den tors 29 feb. 2024 kl 08:48 skrev garcetto <garcetto@xxxxxxxxx>:
>> >
>> > good morning,
>> >   I am trying to understand how Ceph snapshots work.
>> > I have read that snaps are COW, which means, if I understand correctly, that when a new write updates an existing block on a volume, the "old" block is copied to the snap before it is overwritten on the original volume. Am I right?
>> > So, I created a volume, say 10 GB in size, empty, then created a snap.
>> > Coming to my doubt: why is the snap 10 GB in size? It should be 0, because no writes or updates were done. Am I right?
>>
>> This may depend on how you ask about the size. The snapshot should
>> present itself as being 10G in size if you, as a consumer of it, ask
>> how large it is. If you run something like "rbd info <name>" and
>> "rbd du <name>" you should be able to see the difference between its
>> apparent (provisioned) size and how much storage it actually
>> consumes on the cluster.
>>
>> rbd du -p glance-images 12345678-0e1e-4d51-abcd-484604d1df0a
>> NAME                                      PROVISIONED  USED
>> 12345678-0e1e-4d51-abcd-484604d1df0a@snap       40GiB 40GiB
>> 12345678-0e1e-4d51-abcd-484604d1df0a            40GiB    0B
>> <TOTAL>                                         40GiB 40GiB
>>
>> --
>> May the most significant bit of your life be positive.
>
> _______________________________________________
> Dev mailing list -- dev@xxxxxxx
> To unsubscribe send an email to dev-leave@xxxxxxx