I know that it is possible to run CephFS with a cache tier on the data
pool in Giant, because that's what I do. However, when I configured it I
was on the previous release; when I upgraded to Giant, everything just
kept working.

By the way, when I set it up, I used the following commands:

ceph osd pool create cephfs-data 192 192 erasure
ceph osd pool create cephfs-metadata 192 192 replicated ssd
ceph osd pool create cephfs-data-cache 192 192 replicated ssd
ceph osd pool set cephfs-data-cache crush_ruleset 1
ceph osd pool set cephfs-metadata crush_ruleset 1
ceph osd tier add cephfs-data cephfs-data-cache
ceph osd tier cache-mode cephfs-data-cache writeback
ceph osd tier set-overlay cephfs-data cephfs-data-cache
ceph osd dump
ceph mds newfs 5 6 --yes-i-really-mean-it

So I didn't actually add a cache tier to an existing CephFS: I created
the pools first and set up CephFS on them right after. In my case the
"ssd" pool is ssd-backed (obviously), while the default pool is on
rotating media; crush_ruleset 1 is meant to place both the cache pool
and the metadata pool on the SSDs (a rough sketch of that rule is at the
end of this message).

Erik.

On 11/16/2014 08:01 PM, Scott Laird wrote:
> Is it possible to add a cache tier to CephFS's data pool in Giant?
>
> I'm getting an error:
>
> $ ceph osd tier set-overlay data data-cache
>
> Error EBUSY: pool 'data' is in use by CephFS via its tier
>
> From what I can see in the code, that comes from
> OSDMonitor::_check_remove_tier; I don't understand why set-overlay
> needs to call _check_remove_tier. A quick look makes it look like
> set-overlay will always fail once MDS has been set up. Is this a bug,
> or am I doing something wrong?
>
> Scott
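
For reference, the rule behind crush_ruleset 1 would look roughly like
this in a decompiled crushmap. This is only a sketch: the "ssd" root
bucket and the "host" failure domain are assumptions, so adjust them to
your own CRUSH hierarchy.

rule ssd {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        # take everything under the (assumed) ssd root and spread
        # replicas across distinct hosts
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
}

You can check what rule 1 really does on your own cluster with:

ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt

and, after editing, recompile with "crushtool -c" and inject it with
"ceph osd setcrushmap -i".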
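
One more note: the commands above only wire up the tiering itself. For a
writeback cache tier you normally also want hit set and sizing
parameters on the cache pool, otherwise the tiering agent has little to
flush or evict against. Something along these lines (the pool name
matches my setup above; the numbers are placeholders you would tune to
your cache capacity):

ceph osd pool set cephfs-data-cache hit_set_type bloom
ceph osd pool set cephfs-data-cache hit_set_count 1
ceph osd pool set cephfs-data-cache hit_set_period 3600
ceph osd pool set cephfs-data-cache target_max_bytes 200000000000
ceph osd pool set cephfs-data-cache cache_target_dirty_ratio 0.4
ceph osd pool set cephfs-data-cache cache_target_full_ratio 0.8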