Hi David

The removal process/commands ran as follows:

# ceph osd crush reweight osd.<OSD NR> 0
# ceph osd out <OSD NR>
# systemctl stop ceph-osd@<OSD NR>
# umount /var/lib/ceph/osd/ceph-<OSD NR>
# ceph osd crush remove osd.<OSD NR>
# ceph auth del osd.<OSD NR>
# ceph osd rm <OSD NR>
# ceph-disk zap /dev/sd??

Adding them back in:

We skipped stage 1 and replaced the UUIDs of the old disks with the new ones in policy.cfg. We ran salt '*' pillar.items and confirmed that the output was correct; it showed the new UUIDs in the correct places.
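For completeness, the kind of pillar check this involved looks like the following (a sketch only; the minion glob and the UUID placeholder are illustrative, not our real values):

# dump the pillar for one storage node and look for a new disk's UUID
salt 'osd-node1*' pillar.items | grep -B2 -A2 '<NEW DISK UUID>'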
Next we ran salt-run state.orch ceph.stage.3.

PS: All of the above ran successfully.

The output of ceph osd tree showed that these new disks are currently in a ghost bucket, not even under root=default and without a weight. The first step I then tried was to reweight them, but I got the errors below:

Error ENOENT: device osd.<OSD NR> does not appear in the crush map
Error ENOENT: unable to set item id 39 name 'osd.39' weight 5.45599 at location {host=veeam-mk2-rack1-osd3,rack=veeam-mk2-rack1,room=veeam-mk2,root=veeam}: does not exist
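For reference, this is the kind of command I understand should place an OSD back into the CRUSH hierarchy by hand (a sketch only; the weight and bucket names are copied from the error above, and I am not sure whether the intermediate buckets need to exist first):

# sketch: add osd.39 to the CRUSH map at the intended location with its full weight
ceph osd crush add osd.39 5.45599 root=veeam room=veeam-mk2 rack=veeam-mk2-rack1 host=veeam-mk2-rack1-osd3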
But when I run the command ceph osd find <OSD NR>:

v-cph-admin:/testing # ceph osd find 39
{
    "osd": 39,
    "ip": "143.160.78.97:6870\/24436",
    "crush_location": {}
}

Please let me know if there's any other info that you may need to assist.

Regards
J.

>>> David Turner <drakonstein@xxxxxxxxx> 2019/02/18 17:08 >>>
Also, what commands did you run to remove the failed HDDs, and what commands have you run so far to add their replacements back in?

On Sat, Feb 16, 2019 at 9:55 PM Konstantin Shalygin <k0ste@xxxxxxxx> wrote:
Vrywaringsklousule / Disclaimer: http://www.nwu.ac.za/it/gov-man/disclaimer.html
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com