some snapshot problems

---------- Forwarded message ----------
From: liu yaqi <liuyaqiyaqi@xxxxxxxxx>
Date: 2012/11/12
Subject: Re: some snapshot problems
To: Sage Weil <sage@xxxxxxxxxxx>
Cc: ceph-devel@xxxxxxxxxxxxxxx, ctpm@xxxxxxxxxx, josh.durgin@xxxxxxxxxxx




2012/11/9 Sage Weil <sage@xxxxxxxxxxx>
>
> Lots of different snapshots:
>
>  - librados lets you do 'selfmanaged snaps' in its API, which let an
>    application control which snapshots apply to which objects.
>  - you can create a 'pool' snapshot on an entire librados pool.  this
>    cannot be used at the same time as rbd, fs, or the above 'selfmanaged'
>    snaps.
>  - rbd lets you snapshot block device images (by using the librados
>    selfmanaged snap API).
>  - the ceph file system lets you snapshot any subdirectory (again
>    utilizing the underlying RADOS functionality).
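
To make sure I understand the first two cases, this is how I would try
them through the librados C API; the pool and snapshot names are only
placeholders, error handling is omitted, and (as you said) pool
snapshots and self-managed snapshots cannot be mixed in one pool, so I
use two different pools:

    #include <rados/librados.h>

    int main(void)
    {
        rados_t cluster;
        rados_ioctx_t io_a, io_b;
        rados_snap_t snapid;

        /* connect with the default ceph.conf and client id */
        rados_create(&cluster, NULL);
        rados_conf_read_file(cluster, NULL);
        rados_connect(cluster);

        /* pool snapshot: one snapshot covering everything in "pool_a" */
        rados_ioctx_create(cluster, "pool_a", &io_a);
        rados_ioctx_snap_create(io_a, "poolsnap1");

        /* self-managed snapshot on "pool_b": the application allocates a
         * snap id and chooses the snap context attached to later writes */
        rados_ioctx_create(cluster, "pool_b", &io_b);
        rados_ioctx_selfmanaged_snap_create(io_b, &snapid);
        rados_snap_t snaps[1] = { snapid };
        rados_ioctx_selfmanaged_snap_set_write_ctx(io_b, snapid, snaps, 1);
        /* ... writes issued through io_b now carry this snap context ... */

        rados_ioctx_destroy(io_a);
        rados_ioctx_destroy(io_b);
        rados_shutdown(cluster);
        return 0;
    }

(link with -lrados)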


I am confused about the concepts of "pool" and "image". Is a pool a
set of placement groups? When I snapshot an image, does that mean a
snapshot of a single disk?
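
To make the question concrete, this is roughly how I picture an image
snapshot being taken through the librbd C API; the pool, image, and
snapshot names below are placeholders, and error handling is omitted:

    #include <rados/librados.h>
    #include <rbd/librbd.h>

    int main(void)
    {
        rados_t cluster;
        rados_ioctx_t io;
        rbd_image_t image;

        rados_create(&cluster, NULL);
        rados_conf_read_file(cluster, NULL);
        rados_connect(cluster);

        /* "rbd" is the pool that holds many images */
        rados_ioctx_create(cluster, "rbd", &io);

        /* open one image (one virtual disk) and snapshot only that image */
        rbd_open(io, "myimage", &image, NULL);
        rbd_snap_create(image, "snap1");
        rbd_close(image);

        rados_ioctx_destroy(io);
        rados_shutdown(cluster);
        return 0;
    }

(link with -lrados -lrbd)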

I think a snapshot is used to preserve the state of a directory at one
point in time, and I wonder whether there could be a situation where I
preserve the data of the directory but not its metadata (perhaps
because the metadata and the data are not in the same pool). Could
this happen?
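
My current understanding is that a CephFS directory snapshot is created
by making a subdirectory under the hidden .snap directory; a small
libcephfs sketch, where the path and snapshot name are placeholders:

    #include <cephfs/libcephfs.h>

    int main(void)
    {
        struct ceph_mount_info *cmount;

        ceph_create(&cmount, NULL);          /* NULL = default client id */
        ceph_conf_read_file(cmount, NULL);   /* default ceph.conf locations */
        ceph_mount(cmount, "/");

        /* creating <dir>/.snap/<name> snapshots that directory subtree */
        ceph_mkdir(cmount, "/mydir/.snap/snap1", 0755);

        ceph_unmount(cmount);
        ceph_release(cmount);
        return 0;
    }

(link with -lcephfs; on a mounted client the same thing is just
"mkdir mydir/.snap/snap1")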

When snapshotting a directory, I traced the code in the MDS: snapshot
information is added to the inode, but where and when is the content of
the snapshot created? What is the data structure of the snapshot
content? When the client sets an inode attribute, there is a check of
snapid against NOSNAP before returning; does this mean that once an
inode has been snapshotted, it cannot be changed? So the snapshot does
not use the copy-on-write method (create the snapshot, then change the
content of the snapped file when an inode attribute is set or the file
is written)? If it is not copy-on-write, what is the snapshot workflow
for a directory?
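
To show how I read that check, here is only a schematic illustration,
not the actual Ceph client code (the structure and function names are
invented): attribute writes are allowed only on the head inode, and any
snapshot view is treated as read-only.

    #include <errno.h>
    #include <stdint.h>

    #define NOSNAP ((uint64_t)(-2))   /* stands in for CEPH_NOSNAP */

    /* hypothetical stand-in for an inode as the client sees it */
    struct my_inode {
        uint64_t snapid;              /* NOSNAP for the live ("head") inode */
        /* ... attributes ... */
    };

    /* changes are only accepted on the head inode; an inode that
     * represents a snapshot view is rejected as read-only */
    static int my_setattr(struct my_inode *in)
    {
        if (in->snapid != NOSNAP)
            return -EROFS;
        /* ... modify the head inode's attributes here ... */
        return 0;
    }

    int main(void)
    {
        struct my_inode head = { NOSNAP };
        struct my_inode snap = { 1 };
        return (my_setattr(&head) == 0 && my_setattr(&snap) == -EROFS) ? 0 : 1;
    }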

There are multiple metadata servers, and one directory may be spread
over several of them, with each server holding one part of the
directory. How does Ceph resolve this for snapshots? It also seems to
cause a clock problem.

Thanks

