Hi Dominique,

How do I check using cephadm shell? I am new to cephadm :)

https://paste.opendev.org/show/b4egkEdAkCWSkT3VRyO9/

On Fri, Sep 30, 2022 at 6:20 AM Dominique Ramaekers <dominique.ramaekers@xxxxxxxxxx> wrote:
>
> Ceph.conf isn't available on that node/container.
> What happens if you try to start a cephadm shell on that node?
>
> > -----Original message-----
> > From: Satish Patel <satish.txt@xxxxxxxxx>
> > Sent: Thursday, September 29, 2022 21:45
> > To: ceph-users <ceph-users@xxxxxxx>
> > Subject: Re: strange osd error during add disk
> >
> > Bump! Any suggestions?
> >
> > On Wed, Sep 28, 2022 at 4:26 PM Satish Patel <satish.txt@xxxxxxxxx> wrote:
> > >
> > > Folks,
> > >
> > > I have 15 nodes for Ceph, and each node has a 160TB disk attached. I am
> > > using the cephadm Quincy release. 14 of the 15 nodes have been added,
> > > but one node gives a very strange error when I try to add it. I have
> > > put the full logs here:
> > > https://paste.opendev.org/show/bbSKwlSLyANMbrlhwzXL/
> > >
> > > In short, these are the errors I am getting. I have tried zapping the
> > > disk and re-adding it, but I get the same error every single time.
> > >
> > > [2022-09-28 20:13:28,644][ceph_volume.main][INFO ] Running command: ceph-volume lvm list --format json
> > > [2022-09-28 20:13:28,644][ceph_volume.main][ERROR ] ignoring inability to load ceph.conf
> > > Traceback (most recent call last):
> > >   File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 145, in main
> > >     conf.ceph = configuration.load(conf.path)
> > >   File "/usr/lib/python3.6/site-packages/ceph_volume/configuration.py", line 51, in load
> > >     raise exceptions.ConfigurationError(abspath=abspath)

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
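
For anyone hitting the same "unable to load ceph.conf" error, here is a minimal sketch of how one might check from the affected node. It assumes cephadm is installed on that node and the node has already been added to the cluster with "ceph orch host add"; the hostname placeholder and the /dev/sdb device path are only examples, not taken from the thread.

  # open a shell inside a container that has the cluster config and keyring mounted
  sudo cephadm shell

  # inside that shell, confirm the config is present and the cluster is reachable
  ls -l /etc/ceph/ceph.conf
  ceph -s

  # re-run the command that failed in the log, this time with a usable ceph.conf
  ceph-volume lvm list --format json

  # if the shell works, retry the zap and the OSD add through the orchestrator
  ceph orch device zap <hostname> /dev/sdb --force
  ceph orch daemon add osd <hostname>:/dev/sdb

If /etc/ceph/ceph.conf is missing on the host itself, one option is to run "ceph config generate-minimal-conf" on an admin node and copy the output over; another is to give the host the _admin label ("ceph orch host label add <hostname> _admin") so cephadm distributes the config and admin keyring to it automatically.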