On Thu, Jan 23, 2020 at 3:31 PM Hayashida, Mami <mami.hayashida@xxxxxxx> wrote:
>
> Thanks, Ilya.
>
> First, I was not sure whether to post my question on @ceph.io or
> @lists.ceph.com (I subscribe to both) -- should I use @ceph.io in the
> future?

Yes.  I got the following notice when I replied to your previous email:

    As you may or may not be aware, most of the Ceph mailing lists have
    migrated to a new self-hosted instance of Mailman 3.  For the past few
    months, both this list and ceph-users@xxxxxxx have been enabled.

    As of January 22, 2020, mail sent to ceph-users@xxxxxxxxxxxxxx will no
    longer be delivered.  This domain will remain so that permalinks to
    archives are preserved.  Please use the new address ceph-users@xxxxxxx
    instead.

> Second, thanks for your advice on cache-tiering -- I was starting to
> feel that way, but it is always good to know what the Ceph "experts"
> would say.
>
> Third, I tried the pool application enable/set commands you outlined,
> but got errors (Ceph is not letting me enable/set the application on
> the cache-tier pool):
>
> $ ceph osd pool application enable cephfs-data-cache cephfs
> Error EINVAL: application must be enabled on base tier
> $ ceph osd pool application set cephfs-data-cache cephfs data cephfs_test
> Error EINVAL: application metadata must be set on base tier
>
> Since at this point it is highly unlikely that we will be using a cache
> tier on our production clusters, and there is a workaround (manually
> creating a CephFS client key), this is nothing serious or urgent; but I
> thought I should let you guys know.

I haven't actually tried it.  There is probably a way to make it work
(recreating the pools and doing the application stuff before tiering, or
something along those lines -- see the sketch below), but yeah, cache
tiering is not without sharp edges...

Thanks,

                Ilya
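
For anyone who hits the same EINVAL errors, here is a rough, untested
sketch of the ordering suggested above: set the application metadata on
the base pool first, and only attach the cache tier afterwards.  The base
pool name cephfs-data is assumed here for illustration; the cache pool
name and the application values are taken from the commands quoted above.

    # Create both pools (PG counts are placeholders).
    $ ceph osd pool create cephfs-data 64
    $ ceph osd pool create cephfs-data-cache 64

    # Application metadata goes on the base tier, not the cache pool.
    $ ceph osd pool application enable cephfs-data cephfs
    $ ceph osd pool application set cephfs-data cephfs data cephfs_test

    # Only then layer the cache tier on top of the base pool.
    $ ceph osd tier add cephfs-data cephfs-data-cache
    $ ceph osd tier cache-mode cephfs-data-cache writeback
    $ ceph osd tier set-overlay cephfs-data cephfs-data-cache

    # A hit set is required for a writeback cache tier to function.
    $ ceph osd pool set cephfs-data-cache hit_set_type bloom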