Re: cephfs inode backtrace information




From my observation, Ceph uses roughly 512 bytes per inode for backtrace information.

So what matters is not the TiB in your EC pool, but the number of files (inodes).

(Again, this is concluded from observation on Ceph 18.2.1, and the fact that it's called "inodes"; I have not checked the code.)
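Under that assumption (~512 bytes per inode, observed but not verified against the source), a back-of-the-envelope estimate of the backtrace pool footprint looks like this; the function name and the 112 M figure (matching the object count in the pool listing below) are illustrative:

```python
# Rough estimate of inode backtrace overhead.
# ASSUMPTION: ~512 bytes per inode, from observation on Ceph 18.2.1;
# not verified in the Ceph source code.
BYTES_PER_INODE = 512

def backtrace_pool_bytes(num_inodes: int) -> int:
    """Estimated logical size of backtrace objects for num_inodes files."""
    return num_inodes * BYTES_PER_INODE

# Example: ~112.23 million objects, as in the "data" pool shown below.
est = backtrace_pool_bytes(112_230_000)
print(f"{est / 2**30:.1f} GiB")  # roughly 53.5 GiB
```

So even for a hundred million files, the backtrace pool stays in the tens of GiB, dwarfed by the file data itself.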

Example from my cluster, which has "data_ec" as an EC 4+2 pool and "data" being the "default" pool that holds only the inode backtrace information, and nothing else:

    POOL      ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
    .mgr       1    1  203 MiB       26  609 MiB   90.00      5 GiB
    data       2   32      0 B  112.23M      0 B       0     61 TiB
    data_ec    3  168  124 TiB  115.30M  186 TiB   50.53    121 TiB
    metadata   4  128   63 GiB   32.87k  189 GiB   90.00      5 GiB

The odd thing here is that the 112 M inode backtrace objects count as 0 bytes STORED.

This messes up PG autoscaling; I filed an issue about it here:

I would probably create a pool just for this inode information, simply so that you can see it separately, and easily migrate it to different storage if you change your mind.
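A sketch of how that could look when creating the file system. This assumes the standard cephfs CLI on an existing cluster; the pool names, the fs name "myfs", the EC profile "myprofile", the PG counts, and the mount point "/mnt/myfs" are all placeholders of my choosing:

```shell
# Small replicated pool as the fs's first ("default") data pool;
# it will end up holding only the inode backtrace objects.
ceph osd pool create cephfs_data_default 32
ceph osd pool create cephfs_metadata 32

ceph fs new myfs cephfs_metadata cephfs_data_default

# EC pool for the actual file data.
ceph osd pool create cephfs_data_ec 128 erasure myprofile
ceph osd pool set cephfs_data_ec allow_ec_overwrites true
ceph fs add_data_pool myfs cephfs_data_ec

# Direct all file data to the EC pool via a layout on the fs root;
# new files inherit this, while backtraces still go to the default pool.
setfattr -n ceph.dir.layout.pool -v cephfs_data_ec /mnt/myfs
```

With this split, the backtrace pool shows up separately in `ceph df`, and you can later move it to different devices with a CRUSH rule change without touching the EC data.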

I do not understand your "what would be a good size?" question: if you create multiple pools that use SSDs, your SSD OSDs will be used automatically, and the remaining space is shared across all of your SSD pools anyway -- you do not have to provision "separate" SSDs to make another SSD pool.

See also my related question:
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
