Re: Storage usage of CephFS-MDS

On Sun, Feb 25, 2018 at 10:26 AM, Oliver Freyermuth
<freyermuth@xxxxxxxxxxxxxxxxxx> wrote:
> Looking with:
> ceph daemon osd.2 perf dump
> I get:
>     "bluefs": {
>         "gift_bytes": 0,
>         "reclaim_bytes": 0,
>         "db_total_bytes": 84760592384,
>         "db_used_bytes": 78920024064,
>         "wal_total_bytes": 0,
>         "wal_used_bytes": 0,
>         "slow_total_bytes": 0,
>         "slow_used_bytes": 0,
> so it seems this is almost exclusively RocksDB usage.
>
> Is this expected?

Yes. Directory entries are stored in the omap of the directory objects
in the metadata pool, and omap data is kept in the RocksDB backend of
BlueStore.
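
If you want to see this directly, you can list the omap keys of a
directory object in the metadata pool. A quick illustration, assuming
the default pool name "cephfs_metadata" (substitute your own):

    # The root directory of the file system is object 1.00000000 in the
    # metadata pool; each omap key is one directory entry (dentry).
    rados -p cephfs_metadata listomapkeys 1.00000000

Each key corresponds to a file or subdirectory name in that directory,
which is why a file-heavy workload grows db_used_bytes.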

> Is there a recommendation on how much MDS storage is needed for a CephFS with 450 TB?

It seems in the above test you're using about 1 KB per inode (file).
Using that, you can extrapolate how much space the metadata pool needs
based on your expected file count. (If all you're doing is filling the
file system with empty files, of course you're going to need an
unusually large metadata pool.)
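
As a rough worked example (the file count here is hypothetical; the
~1 KB/inode figure comes from the dump above):

    100,000,000 inodes * ~1 KB/inode              ~= 100 GB of metadata
    100 GB * 3 (assuming a 3x replicated pool)    ~= 300 GB raw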

-- 
Patrick Donnelly


