Re: default data pool and cephfs using erasure-coded pools

Hi,

Support for erasure-coded data pools in cephfs was introduced with Luminous [1], using the allow_ec_overwrites flag on the data pool; the metadata pool still has to be replicated. The recommendation in the docs [2] for EC setups is to have two replicated pools for the metadata and the default data pool, and then add more (EC) data pools to the cephfs:

---snip---
The data pool used to create the file system is the “default” data pool and the location for storing all inode backtrace information, which is used for hard link management and disaster recovery. For this reason, all CephFS inodes have at least one object in the default data pool. If erasure-coded pools are planned for file system data, it is best to configure the default as a replicated pool to improve small-object write and read performance when updating backtraces. Separately, another erasure-coded data pool can be added (see also Erasure code) that can be used on an entire hierarchy of directories and files (see also File layouts).
---snip---

These pools are required per cephfs, not per cluster. It should also work if your default data pool is EC, but performance could be an issue, as stated above.
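
For reference, here is a rough sketch of that layout (pool names, PG counts and the EC profile are just examples, adjust them to your cluster):

---snip---
# replicated pools for metadata and the default data pool
ceph osd pool create cephfs_metadata 32 replicated
ceph osd pool create cephfs_data 32 replicated
ceph fs new mycephfs cephfs_metadata cephfs_data

# additional erasure-coded data pool; overwrites must be enabled for cephfs
ceph osd erasure-code-profile set ec42 k=4 m=2
ceph osd pool create cephfs_data_ec 64 erasure ec42
ceph osd pool set cephfs_data_ec allow_ec_overwrites true
ceph fs add_data_pool mycephfs cephfs_data_ec

# place a directory (and everything below it) on the EC pool
setfattr -n ceph.dir.layout.pool -v cephfs_data_ec /mnt/mycephfs/archive
---snip---

If you really want an EC pool as the default data pool anyway, I believe 'ceph fs new' only accepts that with an additional --force.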

Regards,
Eugen

[1] https://ceph.io/en/news/blog/2017/new-luminous-erasure-coding-rbd-cephfs/
[2] https://docs.ceph.com/en/latest/cephfs/createfs/

Quoting Jerry Buburuz <jbuburuz@xxxxxxxxxxxxxxx>:

Hello,

Scenario 1.

Create 2 pools (1 data, 1 meta) for cephfs using EC:

ceph fs new mycephfs meta1 data1
Error: "EC pool for default data pool discouraged"

Reading creating-pools, I think I understand that you want the recovery
information for the system kept in replicated pools. I am just not certain
whether ceph needs one default replicated pool for the whole cluster?

Scenario 2.

Can I do this (rough commands sketched after the steps):

# create default pool
1. create 2 new pools, data and meta, replicated.
2. create a new fs from the pools in step 1.

# create EC pool for cephfs export
3. create 2 new pools, data1 and meta1, erasure-coded.
4. create a new fs from the pools in step 3.
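
In other words, something like this (pool names are placeholders, not sure about the exact flags):

ceph osd pool create meta 32 replicated
ceph osd pool create data 32 replicated
ceph fs new fs1 meta data

ceph fs flag set enable_multiple true    # needed for a second fs?
ceph osd pool create meta1 32 erasure    # not sure metadata can be EC?
ceph osd pool create data1 32 erasure
ceph fs new fs2 meta1 data1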

Hope this is clear. If this does work, does it mean a cluster has one
default pool, or a default replicated pool for every new erasure-coded
pool?

thanks
jerry


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



