On 08/02/2019 20.54, Ashley Merrick wrote:
> Yes, that is all fine; the other 3 OSDs on the node work fine as expected.
>
> When I did the original OSD via ceph-deploy I used the external hostname
> at the end of the command instead of the internal hostname. I then
> deleted the OSD, zapped the disk, and re-added it using the internal
> hostname, along with the other 3 OSDs.
>
> The other 3 are using the internal IP fine; the first OSD is not.
>
> The config and everything else is fine, as I can reboot any of the other
> 3 OSDs and they work fine. Somewhere, osd.22 is still storing the
> original hostname/IP it was given via ceph-deploy, even after the rm /
> disk zap.

The OSDMap stores the OSD IP, though I *think* it's supposed to update itself when the OSD's IP changes.

If this is a new OSD and you don't care about the data (or can just let it rebalance away), just follow the instructions for "add/remove OSDs" here:

http://docs.ceph.com/docs/mimic/rados/operations/add-or-rm-osds/

Make sure that when the OSD is gone, it really is gone (nothing in 'ceph osd tree' or 'ceph osd ls'), e.g. 'ceph osd purge <id> --yes-i-really-mean-it', and make sure there isn't a spurious entry for it in ceph.conf; then re-deploy. Once you do that, there is no other place for the OSD to somehow remember its old IP.

-- 
Hector Martin (hector@xxxxxxxxxxxxxx)
Public Key: https://mrcn.st/pub
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
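P.S. For anyone following along, the purge/verify/redeploy sequence described above might look roughly like the following. This is only a sketch based on the add-or-rm-osds docs: the OSD id 22 comes from the thread, but the hostname `node1` and device path `/dev/sdX` are placeholders you must replace with your own, and the exact ceph-deploy syntax varies between ceph-deploy 1.x and 2.x (2.x shown here).

```shell
# Stop the OSD daemon on its host first (unit name may differ by release)
systemctl stop ceph-osd@22

# Remove the OSD from the CRUSH map, auth keys, and OSD map in one step
ceph osd purge 22 --yes-i-really-mean-it

# Verify it is really gone: osd.22 must not appear in either listing
ceph osd tree
ceph osd ls

# Check for a stale [osd.22] section or host/addr override in ceph.conf
grep -A3 'osd.22' /etc/ceph/ceph.conf

# Zap the disk and re-deploy from the admin node
# (hostname and device path are placeholders)
ceph-deploy disk zap node1 /dev/sdX
ceph-deploy osd create --data /dev/sdX node1
```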