I rebuilt my cluster and now it appears to be working correctly.
Sorry for the false alarm.

On Thu, Mar 8, 2018 at 1:43 PM, Mykola Golub <to.my.trociny@xxxxxxxxx> wrote:
> On Tue, Mar 06, 2018 at 09:43:36AM -0500, Wyllys Ingersoll wrote:
>> Under Luminous 12.2.4 (with filestore OSDs), listing images on an
>> erasure-coded pool results in the command hanging and never returning.
>>
>> Ex:
>> $ ceph osd pool create ectest 32 32 erasure default
>> $ ceph osd pool create ectest-cache 32 32
>> $ ceph osd tier add ectest ectest-cache
>> $ ceph osd tier cache-mode ectest-cache writeback
>> $ ceph osd tier set-overlay ectest ectest-cache
>> $ ceph osd pool application enable ectest rbd
>> $ rbd -p ectest ls
>> ... (never returns) ...
>>
>> $ rbd create ectest/foobar --size 5G
>> ... (never returns) ...
>
> Are you sure your ec pool is in an OK state? Could you please show `ceph -s`
> (and `ceph health detail` if it is not OK)?
>
> If the health is OK, what happens when you try `rados -p ectest ls`?
> If that hangs too, posting the result of
> `rados --debug-rados=30 -p ectest ls` here might be useful.
>
> --
> Mykola Golub
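
For anyone who hits the same hang and would rather not rebuild: before tearing the
cluster down, it is worth checking that the cache tier is actually wired up and that
plain rados I/O goes through the overlay. The sequence below is only a rough sketch
along the lines Mykola suggests, reusing the pool names from the report above; the
object name and temp file paths are arbitrary, and the exact fields shown by
"ls detail" vary by release.

$ ceph -s                                   # overall health; hangs usually mean PGs are not active+clean
$ ceph health detail                        # per-PG breakdown if health is not OK
$ ceph osd pool ls detail                   # ectest should list ectest-cache as its read_tier/write_tier
$ echo test > /tmp/sanity.in
$ rados -p ectest put sanity-check /tmp/sanity.in      # write through the writeback cache tier
$ rados -p ectest get sanity-check /tmp/sanity.out     # read it back; compare with the input
$ rados --debug-rados=30 -p ectest ls       # verbose client trace if listing still hangs

If the raw rados put/get works but `rbd -p ectest ls` still blocks, the problem is
more likely on the rbd side than in the tiering setup.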