Re: Cephfs default data pool (inode backtrace) no longer a thing?

Are you sure there are no objects? Here is what it looks like on our FS:

    NAME                     ID     USED        %USED     MAX AVAIL     OBJECTS
    con-fs2-meta1            12     474 MiB      0.04       1.0 TiB      35687606
    con-fs2-meta2            13         0 B         0       1.0 TiB     300163323

Meta1 is the metadata pool and meta2 the default data pool. Meta2 shows 0 bytes used, yet it contains roughly 10x as many objects as the metadata pool. These objects hold only metadata (the inode backtraces), which is why no actual usage is reported (at least on mimic).
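
You can check that these objects really are metadata-only. Each file's first object in the default data pool carries the backtrace in its "parent" xattr. A minimal sketch, using the pool name from the listing above and a made-up object name:

    # list a few objects in the default data pool
    rados -p con-fs2-meta2 ls | head -n 3

    # fetch the backtrace xattr of a file's first object
    # (object name "10000000001.00000000" is just an example)
    rados -p con-fs2-meta2 getxattr 10000000001.00000000 parent > parent.bin
    ceph-dencoder type inode_backtrace_t import parent.bin decode dump_json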

The data in this default data pool is a serious challenge for recovery. I put it on fast SSDs, but the large number of objects requires aggressive recovery options. With the default settings, recovery of this pool takes longer than the rebuild of the data in the EC data pools on HDD. I also allocated a lot of PGs to it to reduce the object count per PG. Having this data on fast drives with tuned settings helps a lot with overall recovery and snaptrim.
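
By "aggressive recovery options" I mean knobs along these lines; the values are examples only, tune them to your hardware:

    # allow more parallel recovery/backfill work per OSD
    ceph config set osd osd_max_backfills 8
    ceph config set osd osd_recovery_max_active 8
    # no recovery throttling on SSD OSDs
    ceph config set osd osd_recovery_sleep_ssd 0

    # more PGs on the default data pool to cut the object count per PG
    # (512 is an example, size it for your cluster)
    ceph osd pool set con-fs2-meta2 pg_num 512
    ceph osd pool set con-fs2-meta2 pgp_num 512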

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Vladimir Brik <vladimir.brik@xxxxxxxxxxxxxxxx>
Sent: 15 March 2022 20:53:25
To: ceph-users
Subject:  Cephfs default data pool (inode backtrace) no longer a thing?

Hello

https://docs.ceph.com/en/latest/cephfs/createfs/ mentions a
"default data pool" that is used for "inode backtrace
information, which is used for hard link management and
disaster recovery", and "all CephFS inodes have at least one
object in the default data pool".

I noticed that when I create a volume using "ceph fs volume
create" and then add the EC data pool where my files
actually are, the default pool remains empty (no objects).
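
Concretely, I do roughly the following (fs/pool names, PG counts and the mount point are just examples):

    ceph fs volume create myfs

    # EC data pool; overwrites must be enabled for CephFS
    ceph osd pool create myfs.ec-data 64 64 erasure
    ceph osd pool set myfs.ec-data allow_ec_overwrites true
    ceph fs add_data_pool myfs myfs.ec-data

    # direct file data into the EC pool via a directory layout
    setfattr -n ceph.dir.layout.pool -v myfs.ec-data /mnt/myfs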

Does this mean that the recommendation from the link above
"If erasure-coded pools are planned for file system data, it
is best to configure the default as a replicated pool" is no
longer applicable, or do I need to configure something to
avoid a performance hit when using EC data pools?


Thanks

Vlad
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx