Re: Cephfs: proportion of data between data pool and metadata pool

On Sat, 25 Apr 2015, Gregory Farnum wrote:
> That's odd -- I almost want to think the pg statistics reporting is going
> wrong somehow.
> ...I bet the leveldb/omap stuff isn't being included in the statistics.
> That could be why and would make sense with what you've got here. :)

Yeah, the pool stats sum up bytes and objects, but not keys (or key 
sizes).

We should probably expand the stats struct to include

uint64_t kv;       // key/value pairs
uint64_t kv_bytes; // key/value bytes (key + value length)

sage
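
For illustration, here is a minimal sketch of what such an extension
could look like. The struct name, the existing fields and the add()
helper are assumptions made for this example (loosely modelled on the
per-pool stat sums in src/osd/osd_types.h), not the actual Ceph code;
only the kv/kv_bytes fields come from the suggestion above.

    #include <cstdint>

    // Hypothetical per-pool stat accumulator; everything except the
    // proposed kv/kv_bytes fields is illustrative, not Ceph's real code.
    struct pool_stat_sum_t {
      uint64_t num_bytes = 0;    // object data bytes (already reported by 'ceph df')
      uint64_t num_objects = 0;  // object count (already reported by 'ceph df')

      // proposed additions for key/value (leveldb/omap) accounting:
      uint64_t kv = 0;           // number of key/value pairs
      uint64_t kv_bytes = 0;     // key/value bytes (key + value length)

      void add(const pool_stat_sum_t &o) {
        num_bytes   += o.num_bytes;
        num_objects += o.num_objects;
        kv          += o.kv;
        kv_bytes    += o.kv_bytes;
      }
    };

With fields like these rolled into the per-pool sums, 'ceph df detail'
could report omap usage alongside bytes and objects, which is exactly
what is missing from the metadata-pool numbers in the thread below.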


> -Greg
>
> On Sat, Apr 25, 2015 at 10:32 AM Adam Tygart <mozes@xxxxxxxxxxx> wrote:
>       cephfs (really ec84pool) is an EC pool (k=8 m=4); cachepool is a
>       writeback cache tier in front of ec84pool. As far as I know, we've
>       not done any strange configuration.
> 
>       Potentially relevant configuration details:
>       ceph osd crush dump >
>       http://people.beocat.cis.ksu.edu/~mozes/ceph/crush_dump.txt
>       ceph osd pool ls detail >
>       http://people.beocat.cis.ksu.edu/~mozes/ceph/pool_ls_detail.txt
>       ceph mds dump >
>       http://people.beocat.cis.ksu.edu/~mozes/ceph/mds_dump.txt
>       getfattr -d -m '.*' /tmp/cephfs >
>       http://people.beocat.cis.ksu.edu/~mozes/ceph/getfattr_cephfs.txt
> 
>       rsync is ongoing, moving data into cephfs. It would seem the data
>       is truly there, both with metadata and file data. md5sums match
>       for files that I've tested.
>       --
>       Adam
> 
>       On Sat, Apr 25, 2015 at 12:16 PM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
>       > That doesn't make sense -- 50MB for 36 million files is <1.5
>       > bytes each. How do you have things configured, exactly?
>       >
>       > On Sat, Apr 25, 2015 at 9:32 AM Adam Tygart <mozes@xxxxxxxxxxx> wrote:
>       >>
>       >> We're currently putting data into our cephfs pool (cachepool in
>       >> front of it as a caching tier), but the metadata pool contains
>       >> ~50MB of data for 36 million files. If that were an accurate
>       >> estimation, we'd have a metadata pool closer to ~140GB. Here is
>       >> a ceph df detail:
>       >>
>       >> http://people.beocat.cis.ksu.edu/~mozes/ceph_df_detail.txt
>       >>
>       >> I'm not saying it won't get larger; I have no idea of the code
>       >> behind it. This is just what it happens to be for us.
>       >> --
>       >> Adam
>       >>
>       >>
>       >> On Sat, Apr 25, 2015 at 11:29 AM, François Lafont <flafdivers@xxxxxxx> wrote:
>       >> > Thanks Greg and Steffen for your answers. I will run some tests.
>       >> >
>       >> > Gregory Farnum wrote:
>       >> >
>       >> >> Yeah. The metadata pool will contain:
>       >> >> 1) MDS logs, which I think by default will take up to 200MB
>       >> >> per logical MDS. (You should have only one logical MDS.)
>       >> >> 2) directory metadata objects, which contain the dentries and
>       >> >> inodes of the system; ~4KB is probably generous for each?
>       >> >
>       >> > So one file in the cephfs generates one inode of ~4KB in the
>       >> > "metadata" pool, correct? So (number-of-files-in-cephfs) x 4KB
>       >> > gives me an (approximate) estimate of the amount of data in
>       >> > the "metadata" pool?
>       >> >
>       >> >> 3) Some smaller data structures about the allocated inode
>       >> >> range and current client sessions.
>       >> >>
>       >> >> The data pool contains all of the file data. Presumably this
>       >> >> is much larger, but it will depend on your average file size
>       >> >> and we've not done any real study of it.
>       >> >
>       >> > --
>       >> > François Lafont
> 
> 
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
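
For anyone checking the arithmetic in the quoted thread, here is a quick
back-of-the-envelope sketch. The 4KB-per-file figure is Greg's rough
estimate, and the ~50MB / ~36 million file numbers come from Adam's ceph
df output; none of these are measured constants.

    #include <cstdio>

    int main() {
      const double files          = 36e6;   // ~36 million files in cephfs
      const double bytes_per_file = 4096;   // assumed dentry/inode metadata per file
      const double reported_bytes = 50e6;   // ~50MB shown for the metadata pool

      // Expected pool size if each file really used ~4KB of metadata:
      std::printf("expected metadata: ~%.0f GB\n",
                  files * bytes_per_file / 1e9);   // ~147 GB, the ~140GB ballpark above
      // Per-file size implied by what the pool stats actually report:
      std::printf("implied per file:  ~%.2f bytes\n",
                  reported_bytes / files);         // ~1.39 bytes, Greg's "<1.5 bytes each"
      return 0;
    }

The gap between those two numbers is the point of the thread: the
directory metadata lives in omap key/value pairs, which the pool
statistics do not currently count.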
