I'm pretty sure I had the same issue without the cache tier. Could it
be important that this cluster is still using filestore OSDs?

On Wed, Mar 7, 2018 at 9:04 PM, Gregory Farnum <gfarnum@xxxxxxxxxx> wrote:
> On Tue, Mar 6, 2018 at 6:43 AM, Wyllys Ingersoll
> <wyllys.ingersoll@xxxxxxxxxxxxxx> wrote:
>> Under Luminous 12.2.4 (with filestore OSDs), listing images on an
>> erasure-coded pool results in the command hanging and never returning.
>>
>> Ex:
>> $ ceph osd pool create ectest 32 32 erasure default
>> $ ceph osd pool create ectest-cache 32 32
>> $ ceph osd tier add ectest ectest-cache
>> $ ceph osd tier cache-mode ectest-cache writeback
>> $ ceph osd tier set-overlay ectest ectest-cache
>> $ ceph osd pool application enable ectest rbd
>> $ rbd -p ectest ls
>> ... (never returns) ...
>>
>> $ rbd create ectest/foobar --size 5G
>> ... (never returns) ...
>>
>> The monitor logs do not show any errors while these operations are
>> hanging.
>>
>> This same set of steps worked under Jewel. Is there something new in
>> Luminous that needs to be done? Even if I am doing this wrong, the
>> command should return an error at some point and not just hang
>> indefinitely.
>
> This is weird, but I wonder if the rbd tooling changed because it now
> supports EC pools natively, rather than requiring use of a cache pool.
>
> Or rather, I'm sure it did change; I wonder if it managed to break the
> use of cache pools by doing so. I'd investigate that direction. And
> see if you're better off now just using them directly without having
> to tune the cache size! :)
> -Greg
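
[Editor's note: for reference, the Luminous-native layout Greg alludes to
would look roughly like the following. This is a sketch only; the pool
names are placeholders, and note that allow_ec_overwrites requires
BlueStore OSDs, so it would not apply to a filestore cluster like the
one described above.]

$ ceph osd pool create ectest-data 32 32 erasure default
$ ceph osd pool set ectest-data allow_ec_overwrites true  # BlueStore only
$ ceph osd pool create ectest-meta 32 32
$ ceph osd pool application enable ectest-meta rbd
$ rbd create ectest-meta/foobar --size 5G --data-pool ectest-data

[With this layout, the image header and metadata live in the replicated
pool while the data objects go to the EC pool, so no cache tier is
needed.]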