Hi Ilya, thanks for your answer - really helpful!

We were so desperate today due to this bug that we downgraded to -23. But it's very good to know that -31 doesn't contain this bug and we could safely update back to that release.

If a new version (say -33) is released: how/where can I find out whether it contains the fix? Except by trying it and having it crash, of course - which I'm obviously very reluctant to do...

And one more question, if I may: would that problem also show up on Eoan (Ubuntu 19.10), which was released a few days ago, if we used the most recent kernel there? I think it's 5.3.0-something, if I'm not mistaken...

Thank you / BR

Ranjan

On 21.10.19 at 17:43, Ilya Dryomov wrote:
> On Mon, Oct 21, 2019 at 5:09 PM Ranjan Ghosh <ghosh@xxxxxx> wrote:
>> Hi all,
>>
>> it seems Ceph on Ubuntu Disco (19.04) with the most recent kernel
>> 5.0.0-32 is unstable. It crashes sometimes after a few hours, sometimes
>> even after a few minutes. I found this bug report for CoreOS:
>>
>> https://github.com/coreos/bugs/issues/2616
>>
>> It shows exactly the error message I get ("cache_from_obj: Wrong
>> slab cache. inode_cache but object is from ceph_inode_info"), and the
>> problem seems fairly recent.
>>
>> The problem vanished immediately after downgrading the kernel again.
>
> Hi Ranjan,
>
> Which kernel did you downgrade to?
>
> Looking at the changelog, disco's 5.0.0-32 kernel has the botched
> backport mentioned in the coreos issue you linked to. Downgrading to
> 5.0.0-31 should help, until disco picks up the fix (either just the
> revert or the revert plus the new backport).
>
>     Revert "ceph: use ceph_evict_inode to cleanup inode's resource"
>     ceph: use ceph_evict_inode to cleanup inode's resource
>
> Thanks,
>
>                 Ilya

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
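[A possible answer to the "how can I find out if a new kernel contains the fix" question: Ubuntu publishes a changelog per kernel package, so it can be searched for the revert commit subject Ilya quoted, without installing the kernel. A minimal sketch, assuming you have first saved the changelog locally, e.g. with `apt-get changelog linux-image-5.0.0-33-generic > changelog.txt`; the filename and the function name are placeholders, not anything from the thread:]

```shell
# has_ceph_evict_fix: succeed if the given changelog file mentions the
# revert of the botched ceph_evict_inode backport.
# The argument is wherever you saved the "apt-get changelog" output
# (placeholder path, assumption on my part).
has_ceph_evict_fix() {
  grep -qi 'Revert "ceph: use ceph_evict_inode to cleanup' "$1"
}

# Example usage (assumes changelog.txt exists):
#   if has_ceph_evict_fix changelog.txt; then echo "fix present"; fi
```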