On 1/31/24 20:13, Patrick Donnelly wrote:
> On Tue, Jan 30, 2024 at 5:03 AM Dietmar Rieder <dietmar.rieder@xxxxxxxxxxx> wrote:
>> Hello, I have a question regarding the default pool of a cephfs. According to the docs it is recommended to use a fast ssd replicated pool as default pool for cephfs. I'm asking what are the space requirements for storing the inode backtrace information?
>
> The actual recommendation is to use a replicated pool for the default data pool. Regular hard drives are fine for the storage device.
Yes, true. I was saying SSD because my replicated pool happens to be on SSDs.
>> Let's say I have an 85 TiB replicated ssd pool (hot data) and a 3 PiB EC data pool (cold data). Does it make sense to create a third pool as default pool which only holds the inode backtrace information (what would be a good size), or is it OK to use the ssd pool as default pool?
>
> Assuming your 85 TiB rep ssd pool is the default data pool already, use that.
Yes, I was planning to use the 85 TiB replicated SSD pool as the default; I was just not sure whether it might get substantially filled by inode backtrace information from files that will be stored in the 3 PiB EC pool.
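For context, a minimal sketch of how such a setup can be created (pool names here are hypothetical, not from the thread; the data pool given to `ceph fs new` becomes the default data pool that stores the backtrace xattrs, and the EC pool is attached afterwards):

```shell
# Hypothetical pool names -- adjust to your deployment.
# The pool passed as the data pool to "ceph fs new" becomes the
# default data pool; it will hold the backtrace metadata for every
# file in the filesystem, regardless of which pool holds the data.
ceph fs new cephfs cephfs_metadata cephfs_ssd_rep

# An EC pool must allow overwrites before it can back CephFS data.
ceph osd pool set cephfs_ec_data allow_ec_overwrites true
ceph fs add_data_pool cephfs cephfs_ec_data
```

The backtrace objects in the default data pool are small (essentially one xattr per file recording its ancestry), so the per-file overhead is modest even when the bulk data lives in the EC pool.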
> (I am curious why this question is asked now when the file system already has a significant amount of data? Are you thinking about recreating the fs?)
The system is still being set up, so there is no data on it yet, but we plan to use the 85 TiB replicated pool for user homes and the 3 PiB EC pool for data. We use ceph.dir.layout.pool settings to separate the storage devices/pools for /home and /data.
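The per-directory separation mentioned above is done with CephFS file layout xattrs; a sketch, assuming the filesystem is mounted at /mnt/cephfs and the EC pool is named cephfs_ec_data (both assumptions):

```shell
# Files created under /data after this point are written to the EC
# pool; /home keeps inheriting the default (replicated SSD) data pool.
setfattr -n ceph.dir.layout.pool -v cephfs_ec_data /mnt/cephfs/data

# Verify the layout took effect:
getfattr -n ceph.dir.layout.pool /mnt/cephfs/data
```

Note that the layout only applies to files created after it is set; existing files keep their original pool.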
Dietmar
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx