Re: CephFS: proportion of data between data pool and metadata pool

On Thu, Apr 23, 2015 at 12:55 AM, Steffen W Sørensen <stefws@xxxxxx> wrote:
>> But in the menu, the use case "cephfs only" doesn't exist, and I have
>> no idea of the %data for each of the two pools, metadata and data. So,
>> what is the proportion (approximately) of %data between the "data"
>> pool and the "metadata" pool of CephFS in a CephFS-only cluster?
>>
>> Is it rather metadata=20%, data=80%?
>> Is it rather metadata=10%, data=90%?
>> Is it rather metadata= 5%, data=95%?
>> etc.
> Mileage will vary here, depending on the ratio between the number of entries in your Ceph FS and their sizes, e.g. many small files vs. few large ones.
> So you are probably the best one to estimate this yourself :)


Yeah. The metadata pool will contain:
1) MDS logs, which I think by default will take up to 200MB per
logical MDS. (You should have only one logical MDS.)
2) Directory metadata objects, which contain the dentries and inodes
of the system; ~4KB is probably generous for each?
3) Some smaller data structures about the allocated inode range and
current client sessions.

The data pool contains all of the file data. Presumably this will be
much larger, but it depends on your average file size, and we've not
done any real study of it.
-Greg
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




