Re: metadata management in case of ceph object storage and ceph block storage

On 17/04/2015, at 07.33, Josef Johansson <josef86@xxxxxxxxx> wrote:

To your question, which I’m not sure I understand completely:

So yes, you don’t need the MDS if you only use block storage and object storage (e.g. images for KVM).

So the Mon keeps track of the metadata for the Pool and PG
Well, there really is no metadata at all in the sense of a traditional file system; the monitors keep track of the status of the OSDs. Clients compute which OSDs to talk to in order to reach the objects they want, so there is no need for a central metadata service to tell clients where data is stored. Ceph is a distributed object storage system with potentially no SPOF and the ability to scale out.
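To illustrate the idea (this is not the real CRUSH/rjenkins code, just a toy Python sketch with a made-up hash, PG count and OSD list), the client-side computation looks roughly like this:

# Conceptual sketch only -- Ceph really uses the rjenkins hash and the CRUSH
# algorithm, but the principle is the same: placement is *computed* by the
# client from the cluster map, not looked up in a central metadata service.
import hashlib

PG_NUM = 128                      # placement groups in a hypothetical pool
OSDS = list(range(12))            # OSD ids known from the cluster map (example)
REPLICAS = 3

def object_to_pg(object_name: str) -> int:
    """Hash the object name to a placement group id."""
    digest = hashlib.sha1(object_name.encode()).digest()
    return int.from_bytes(digest[:4], "little") % PG_NUM

def pg_to_osds(pg_id: int) -> list:
    """Stand-in for CRUSH: deterministically pick REPLICAS OSDs for a PG."""
    return [OSDS[(pg_id + i) % len(OSDS)] for i in range(REPLICAS)]

pg = object_to_pg("rbd_data.1234.0000000000000000")
print("object maps to pg %d, stored on OSDs %s" % (pg, pg_to_osds(pg)))

Every client that has the current cluster map arrives at the same answer, which is why no lookup service is needed.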
Try studying Ross’ slides f.ex. here:
or many other good intros on the net, youtube etc.

Clients of a Ceph cluster can access ‘objects’ (blobs of data) through several means: programmatically with librados, as virtual block devices through librbd + librados, and finally as an S3 service through the RADOS Gateway over HTTP(S); the metadata for S3 objects (users + ACLs, buckets + data…) is stored in various pools in Ceph.
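A minimal Python sketch of the first two access paths, assuming the python-rados and python-rbd bindings are installed and that a pool named "data" exists (pool, object and image names are just examples, adjust to your cluster):

# Direct object access via librados, plus creating an RBD image in the same
# pool. No MDS is involved in either path.
import rados, rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("data")            # assumes a pool called "data"

# Object storage: write and read a blob directly.
ioctx.write_full("greeting", b"hello ceph")
print(ioctx.read("greeting"))

# Block storage: an RBD image is just a set of RADOS objects in the pool.
rbd.RBD().create(ioctx, "vm-disk-1", 1 * 1024**3)   # 1 GiB image

ioctx.close()
cluster.shutdown()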

CephFS, built on top of the Ceph object store, can best be compared with a combination of a POSIX file system and other networked file systems, f.ex. NFS, CIFS, AFP, only with a different protocol + access method (FUSE daemon or kernel module). As it implements a regular file name space, it needs to store metadata about which files exist in that name space; this is the job of the MDS server(s), which of course use Ceph object store pools to persistently store this file system metadata.
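A rough sketch of that namespace path, assuming the python-cephfs (libcephfs) binding and an already-created CephFS file system; path names are purely illustrative:

# The MDS handles the namespace operations (mkdir, open), while the file data
# itself still ends up as RADOS objects in the CephFS data pool.
import cephfs

fs = cephfs.LibCephFS(conffile="/etc/ceph/ceph.conf")
fs.mount()
fs.mkdir("/demo", 0o755)
fd = fs.open("/demo/hello.txt", "w", 0o644)   # open for write, create if missing
fs.write(fd, b"hello cephfs", 0)
fs.close(fd)
fs.unmount()
fs.shutdown()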


and the MDS keeps track of all the files, hence the MDS should have at least 10x the memory of what the Mon has.
Hmm, 10x memory isn’t a rule of thumb in my book; it all depends on the use case at hand.
The MDS tracks metadata for files stored in CephFS, which is usually far from all the data in a cluster, unless CephFS is the only usage of course :)
Many use Ceph for sharing virtual block devices among multiple hypervisors as disk devices for virtual machines (VM images), f.ex. with OpenStack, Proxmox etc.
 

I’m no Ceph expert, especially not on CephFS, but this is my picture of it :)

Maybe the architecture docs could help you out? http://docs.ceph.com/docs/master/architecture/#cluster-map

Hope that resolves your question.

Cheers,
Josef

On 06 Apr 2015, at 18:51, pragya jain <prag_2648@xxxxxxxxxxx> wrote:

Please, somebody, reply to my queries.

Thank you
 
-----
Regards
Pragya Jain
Department of Computer Science
University of Delhi
Delhi, India



On Saturday, 4 April 2015 3:24 PM, pragya jain <prag_2648@xxxxxxxxxxx> wrote:


hello all!

As the documentation says, "One of the unique features of Ceph is that it decouples data and metadata".
To apply this mechanism of decoupling, Ceph uses a Metadata Server (MDS) cluster.
The MDS cluster manages metadata operations, like opening or renaming a file.

On the other hand, Ceph's implementation of object storage as a service and block storage as a service does not require an MDS.

My question is:
In the case of object storage and block storage, how does Ceph manage the metadata?

Please help me to understand this concept more clearly.

Thank you
 
-----
Regards
Pragya Jain
Department of Computer Science
University of Delhi
Delhi, India


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
