Re: [EXTERNAL] RE: OSDs flapping with "_open_alloc loaded 132 GiB in 2930776 extents available 113 GiB"

On 9/21/2021 10:44 AM, Dave Piper wrote:
I still can't find a way to get ceph-bluestore-tool working in my containerized deployment. As soon as the OSD daemon stops, the contents of /var/lib/ceph/osd/ceph-<N> are unreachable.

Some speculation on the above: /var/lib/ceph/osd/ceph-<N> is just a mirror of some subfolder on the container's host, perhaps something under /var/lib/ceph/<cluster-fsid>. So you might want to refer to that subfolder directly, or make a symlink to it.

Or learn how to run the command from within a container deployed similarly to the ceph-osd one, with all the proper mappings. Just speculating...
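For instance, you could ask the container runtime where that directory is really mounted from (just a sketch; the container name "ceph-osd-2" is an example, adjust to your deployment):

    # list the bind mounts of the running OSD container
    podman inspect --format '{{ json .Mounts }}' ceph-osd-2
    # or, with docker:
    docker inspect --format '{{ json .Mounts }}' ceph-osd-2

Whichever host path is mapped to /var/lib/ceph/osd/ceph-<N> would be the one to look at directly or symlink to.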


I've found a blog post (https://blog.cephtips.com/perform-osd-maintenance-in-a-container/) that suggests changes to the container's entrypoint are required, but the proposed fix didn't work for me. The container stays alive, but the OSD process within it has still died, which seems to be enough to mask / unmount the files, so the ceph-<N> folder appears empty to all other processes.

The associated pull request in the ceph-container project (https://github.com/ceph/ceph-container/pull/1605) suggests setting `-e DEBUG=stayalive` when running the container as an alternative, but I see the same behaviour when trying this: the folder appears empty as soon as the OSD process crashes.
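Concretely, what I tried amounts to something like this (image name, mounts and OSD id here are illustrative, not the exact invocation my containerized deployment generates):

    docker run -d --privileged \
        -v /var/lib/ceph:/var/lib/ceph:z \
        -v /dev:/dev \
        -e DEBUG=stayalive \
        ... (remaining arguments as for the normal OSD container) ...

    # once the OSD process inside has died, the plan was to run e.g.:
    docker exec -it <container-name> ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-<N>

but by that point /var/lib/ceph/osd/ceph-<N> already appears empty inside the container as well.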


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


