Re: Placing replaced disks to correct buckets.

Hi David,

We ran the removal process with the following commands:

# ceph osd crush reweight osd.<OSD NR> 0
# ceph osd out <OSD NR>
# systemctl stop ceph-osd@<OSD NR>
# umount /var/lib/ceph/osd/ceph-<OSD NR>

# ceph osd crush remove osd.<OSD NR>
# ceph auth del osd.<OSD NR>
# ceph osd rm <OSD NR>
# ceph-disk zap /dev/sd??
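
(Side note: as far as I know, the checks below are the standard way to confirm an OSD is completely gone before re-adding it; none of them should return any output for a removed <OSD NR>.)

# ceph osd tree | grep 'osd.<OSD NR>'
# ceph osd dump | grep 'osd.<OSD NR>'
# ceph auth ls | grep 'osd.<OSD NR>'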

Adding them back in:

We skipped stage 1 and replaced the UUIDs of the old disks with those of the new ones in policy.cfg.
We ran salt '*' pillar.items and confirmed that the output was correct: it showed the new UUIDs in the correct places.
Next we ran salt-run state.orch ceph.stage.3
PS: All of the above ran successfully.

The output of ceph osd tree shows that these new disks are currently sitting in a ghost bucket: not under root=default, and with no weight.

The first thing I then tried was to reweight them, but I got the errors below:
Error ENOENT: device osd.<OSD NR> does not appear in the crush map
Error ENOENT: unable to set item id 39 name 'osd.39' weight 5.45599 at location {host=veeam-mk2-rack1-osd3,rack=veeam-mk2-rack1,room=veeam-mk2,root=veeam}: does not exist
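
Given that the second error says the item "does not exist", I'm wondering whether the right next step is to add the OSD to the CRUSH map explicitly rather than reweighting/setting it. Something along these lines (a sketch only, using osd.39's intended weight and the location from the error above; the bucket names come from our map and would need checking):

# ceph osd crush add osd.39 5.45599 host=veeam-mk2-rack1-osd3 rack=veeam-mk2-rack1 room=veeam-mk2 root=veeam

As far as I understand, 'ceph osd crush add' creates the item if it is missing, whereas 'crush set' and 'crush reweight' expect it to already be in the map.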

Strangely though, when I run the command ceph osd find <OSD NR>, the OSD does exist:
v-cph-admin:/testing # ceph osd find 39
{
"osd": 39,
"ip": "143.160.78.97:6870\/24436",
"crush_location": {}
}
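
Also, crush_location coming back empty makes me wonder whether these OSDs register a CRUSH location at start-up at all. If it is relevant, these are the settings I intend to check on the OSD host (standard option names queried via the admin socket; the actual values will depend on our ceph.conf / DeepSea-generated config):

# ceph daemon osd.39 config get osd_crush_update_on_start
# ceph daemon osd.39 config get crush_location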


Please let me know if there's any other info you need in order to assist.

Regards
J.
>>> David Turner <drakonstein@xxxxxxxxx> 2019/02/18 17:08 >>>
Also, what commands did you run to remove the failed HDDs, and what commands have you run so far to add their replacements back in?

On Sat, Feb 16, 2019 at 9:55 PM Konstantin Shalygin <k0ste@xxxxxxxx> wrote:
I recently replaced failed HDDs and removed them from their respective
buckets as per procedure.

But I’m now facing an issue when trying to place the new ones back into the
buckets. I’m getting errors such as ‘osd nr not found’, ‘file or
directory not found’, or a command syntax error.

I have been using the commands below:

ceph osd crush set <osd.nr> <weight> <bucket>
ceph osd crush <osd nr> set <osd.nr> <weight> <bucket>

I do, however, find the OSD number when I run the command:

ceph osd find <nr>

Your assistance/response to this will be highly appreciated.

Regards
John.

Please paste your `ceph osd tree`, your Ceph version, and the exact error you get, including the OSD number.
Less obfuscation is better in this, perhaps simple, case.

k


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
