Hi all,

I am testing the tiering functionality with CephFS. I used a replicated cache pool in front of an EC data pool, plus a replicated metadata pool, like this:

    ceph osd pool create cache 1024 1024
    ceph osd pool set cache size 2
    ceph osd pool set cache min_size 1
    ceph osd erasure-code-profile set profile11 k=8 m=3 ruleset-failure-domain=osd
    ceph osd pool create ecdata 128 128 erasure profile11
    ceph osd tier add ecdata cache
    ceph osd tier cache-mode cache writeback
    ceph osd tier set-overlay ecdata cache
    ceph osd pool set cache hit_set_type bloom
    ceph osd pool set cache hit_set_count 1
    ceph osd pool set cache hit_set_period 3600
    ceph osd pool set cache target_max_bytes $((280*1024*1024*1024))
    ceph osd pool create metadata 128 128
    ceph osd pool set metadata crush_ruleset 1    # SSD root in crushmap
    ceph fs new ceph_fs metadata cache            # <-- wrong?

I started testing with this, and it worked: I could write to it over CephFS, and the cache was flushing to the ecdata pool as expected (rough check commands are in the P.S. below). But now I notice I created the filesystem on top of the cache pool instead of the underlying data pool. I suppose I should have done this:

    ceph fs new ceph_fs metadata ecdata

So my question is: was this wrong and not doing what I thought it did, or is this somehow handled by Ceph, so that it didn't matter that I specified the cache pool instead of the data pool?

Thank you!
Kenneth
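
PS: in case it is useful, this is roughly how I have been checking which pools the filesystem is attached to and where the data ends up; the names ceph_fs, cache and ecdata are just the ones from my setup above:

    # list the filesystem with its metadata pool and data pool(s)
    ceph fs ls

    # per-pool usage, to see whether objects sit in cache or have been flushed to ecdata
    ceph df

    # pool lines in the osdmap show the tier relationship (tier_of / read_tier / write_tier / cache_mode)
    ceph osd dump | grep -E 'cache|ecdata'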