Re: cephfs inode backtrace information

The docs recommend a fast SSD pool for the CephFS *metadata*, but the
default data pool can be more flexible. The backtraces are relatively
small: each one is an encoded version of the path the inode is located
at, plus the RADOS hobject, which probably accounts for most of the
space usage.
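
If you want to see one for yourself: the backtrace is stored as the
"parent" xattr on the file's first object in the default data pool, so
you can dump and decode it along these lines (pool name, mount point,
and file name here are just examples):

    # inode number of the file, converted to hex
    ino=$(printf '%x' "$(stat -c %i /mnt/cephfs/somefile)")
    # fetch the "parent" xattr from the file's first object
    rados -p cephfs_data getxattr "${ino}.00000000" parent > parent.bin
    # decode the raw xattr into readable JSON
    ceph-dencoder type inode_backtrace_t import parent.bin decode dump_json

The decoded output is just the chain of (parent inode, dentry name)
entries plus the pool id, which gives you a feel for how little space
each backtrace takes.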
So the backtraces should fit fine in your SSD pool, but if all the
CephFS file data is living in the hard drive pool I'd just set the
default up there.
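
If you do want the dedicated third pool you describe, a minimal sketch
of the setup (pool names and PG counts are placeholders) would be:

    # small replicated pool to act as the default data pool
    ceph osd pool create cephfs_default 32 replicated
    ceph osd pool create cephfs_metadata 32 replicated
    ceph fs new cephfs cephfs_metadata cephfs_default
    # erasure-coded pool for the bulk data, attached as a second data pool
    ceph osd pool create cephfs_ec 128 erasure
    ceph osd pool set cephfs_ec allow_ec_overwrites true
    ceph fs add_data_pool cephfs cephfs_ec
    # point a directory's new files at the EC pool via file layouts
    setfattr -n ceph.dir.layout.pool -v cephfs_ec /mnt/cephfs/cold

Note that an EC pool is discouraged (and refused without --force) as
the default data pool precisely because every file's backtrace lands
there; keep the default replicated. Space-wise it should stay small
relative to the data pools, since it only holds those per-file
backtrace objects.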
-Greg

On Tue, Jan 30, 2024 at 2:03 AM Dietmar Rieder
<dietmar.rieder@xxxxxxxxxxx> wrote:
>
> Hello,
>
> I have a question regarding the default pool of a cephfs.
>
> According to the docs it is recommended to use a fast SSD replicated
> pool as the default pool for CephFS. I'm wondering what the space
> requirements are for storing the inode backtrace information.
>
> Let's say I have an 85 TiB replicated SSD pool (hot data) and a 3 PiB
> EC data pool (cold data).
>
> Does it make sense to create a third pool as the default pool which
> only holds the inode backtrace information (and what would be a good
> size), or is it OK to use the SSD pool as the default pool?
>
> Thanks
>     Dietmar