Re: some snapshot problems

On Sun, Nov 11, 2012 at 11:02 PM, liu yaqi <liuyaqiyaqi@xxxxxxxxx> wrote:
> 2012/11/9 Sage Weil <sage@xxxxxxxxxxx>
>
>> Lots of different snapshots:
>>
>>  - librados lets you do 'selfmanaged snaps' in its API, which let an
>>    application control which snapshots apply to which objects.
>>  - you can create a 'pool' snapshot on an entire librados pool.  this
>>    cannot be used at the same time as rbd, fs, or the above 'selfmanaged'
>>    snaps.
>>  - rbd lets you snapshot block device images (by using the librados
>>    selfmanaged snap API).
>>  - the ceph file system lets you snapshot any subdirectory (again
>>    utilizing the underlying RADOS functionality).
>
> I am confused about the concept of "pool" and "image". Is one pool a
> set of placement groups? When I snap an image, does it mean a snapshot
> of one disk?

A pool is a logical namespace into which you place objects. Placement
groups are shards of a pool.
Snapping an image makes use of the self-managed snapshot
infrastructure and takes a snapshot of one RBD volume (so yes, if by
"one disk" you mean one RBD volume).
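To make the "self-managed" idea concrete, here is a small toy model (hypothetical names, not the real librados API, which exposes calls like rados_ioctx_selfmanaged_snap_create): the application hands the store a snapshot context on each write, and the store preserves the pre-write contents as a clone when a newer snapshot covers the object.

```python
# Toy in-memory sketch of self-managed snapshot semantics.
# All class and method names here are illustrative assumptions,
# not part of any Ceph interface.

class SelfManagedSnapStore:
    def __init__(self):
        self.next_snapid = 1
        self.heads = {}    # object name -> current (head) data
        self.clones = {}   # (object name, snapid) -> preserved data

    def create_snap(self):
        # Allocate a new snapshot ID; the *application* decides which
        # objects it applies to by passing it in later snap contexts.
        snapid = self.next_snapid
        self.next_snapid += 1
        return snapid

    def write(self, name, data, snap_context):
        # Copy-on-write: if the object already exists and the snap
        # context names a snapshot, preserve the old head as a clone
        # under the newest snapid before overwriting.
        if name in self.heads and snap_context:
            newest = max(snap_context)
            self.clones.setdefault((name, newest), self.heads[name])
        self.heads[name] = data

    def read(self, name, snapid=None):
        # snapid=None reads the live object; otherwise read the clone
        # preserved for that snapshot.
        if snapid is None:
            return self.heads[name]
        return self.clones[(name, snapid)]
```

A quick walk-through: write v1, take a snapshot, write v2 with that snapshot in the context, and both versions remain readable. RBD image snapshots are built on this same mechanism, with RBD tracking the snap context for the image's objects.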

> I think a snapshot is used to preserve the state of a directory at one
> point in time, and I wonder if there could be a situation where I
> preserve the data of the directory but not its metadata (maybe because
> the metadata and data are not in the same pool). Could this happen?

The Ceph filesystem builds a bit more on top of the RADOS snapshots:
metadata and data are almost never in the same pool, and the metadata
snapshots don't use RADOS snapshots anyway.

> When snapping a directory, I traced the code in the MDS; there is
> snapinfo added to the inode, but where and when is the content of the
> snap created? What is the data structure of the snap content? When the
> client sets an inode attribute, if snapid==NOSNAP it returns; does
> this mean that once the inode has been snapped, it cannot be changed?
> So the snap is not using copy-on-write (create the snap, then change
> the content of the snapped file when setting an inode attribute or
> writing the file)? If not copy-on-write, what is the snap workflow for
> a directory?

You want to look into the code surrounding SnapRealms to see how the
metadata for snapshots is managed.
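As a loose sketch of the SnapRealm idea (the names below are illustrative, not the actual MDS data structures): a realm is attached to a directory subtree, and an inode under that realm is covered by the realm's own snapshot IDs plus those of every ancestor realm.

```python
# Hypothetical sketch of realm-based snapshot inheritance.
# Not the real MDS SnapRealm implementation.

class SnapRealm:
    def __init__(self, parent=None):
        self.parent = parent   # enclosing realm, or None at the root
        self.snaps = []        # snapids taken at this realm

    def take_snapshot(self, snapid):
        # Snapshotting the realm's directory records a new snapid here;
        # it implicitly covers everything below this realm.
        self.snaps.append(snapid)

    def effective_snaps(self):
        # An inode in this realm is covered by its own realm's snaps
        # plus those inherited from all ancestor realms.
        inherited = self.parent.effective_snaps() if self.parent else []
        return inherited + self.snaps
```

For example, a realm on a subdirectory inherits a snapshot taken at the root realm, so snapshotting a parent directory covers files in its children without touching each inode individually.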

> There are multiple metadata nodes, so one directory may be spread over
> multiple servers, each holding one part of the directory. How does
> ceph resolve this problem? It also causes a clock problem.

It's not easy, but again, look at how the SnapRealms are dealt with.
The MDSes will do synchronous notifications to each other.
-Greg
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

