Re: Crush Bucket move crashes mons

Hi Paul,

Many thanks for the reply.

 

The command is: crush move rack04 room=R80-Upper
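(That is, the full shell form, assuming the usual ceph osd crush prefix:)

    ceph osd crush move rack04 room=R80-Upper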

 

Crush map is here: https://pastebin.com/CX7GKtBy

I’ve done some more testing, and the following all work (rough CLI equivalents are sketched after the list):

- Moving machines between the racks under the default root
- Renaming racks/hosts under the default root
- Renaming the default root
- Creating a new root
- Adding rack05 and rack04 + hosts nina408 and nina508 into the new root
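All of those were done through the normal CLI, roughly along these lines (approximate, from memory; the new root's name here is a placeholder):

    ceph osd crush move nina408 rack=rack05          # move a host between racks
    ceph osd crush rename-bucket rack04 rack04-new   # rename a bucket
    ceph osd crush add-bucket newroot root           # create a new root
    ceph osd crush move rack04 root=newroot          # move a rack into the new root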

 

But when trying to move anything into the default root, it fails.

 

I have tried moving the following into the default root:

- nina408 – with hosts in and without
- nina508 – with hosts in and without
- rack04
- rack05
- rack03 – which I created empty, just to try the move

 

 

Since my first email, I have got the cluster to HEALTH_OK by reweighting drives, so everything cluster-wise appears to be functioning fine.
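(That was just the usual override reweight – the OSD id and weight below are examples:)

    ceph osd reweight 12 0.90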

 

I have not tried manually editing the crush map and reimporting it, for the risk that it makes the cluster fall over, as this is currently in production. With the CLI I can at least cancel the command and the monitor comes back up fine.
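For reference, the offline edit-and-reimport workflow I'm holding off on would look roughly like this (file names illustrative); at least crushtool can sanity-check the edited map offline before it goes anywhere near the mons:

    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt                # decompile to editable text
    # ... edit crush.txt, then recompile ...
    crushtool -c crush.txt -o crush.new
    crushtool -i crush.new --test --show-statistics    # check mappings offline
    ceph osd setcrushmap -i crush.new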

 

Many thanks,

Warren

 

 

From: Paul Emmerich [mailto:paul.emmerich@xxxxxxxx]
Sent: 16 March 2018 13:54
To: Jeffs, Warren (STFC,RAL,ISIS) <warren.jeffs@xxxxxxxxxx>
Cc: ceph-users@xxxxxxxx
Subject: Re: [ceph-users] Crush Bucket move crashes mons

 

Hi,

The error looks like there might be something wrong with the device classes (which are managed via separate trees with magic names behind the scenes).
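If you are on Luminous you should be able to inspect those hidden per-class trees (they get names like default~hdd) with:

    ceph osd crush class ls
    ceph osd crush tree --show-shadow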


Can you post your crush map and the command that you are trying to run?
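To dump it in readable form (output paths are just examples):

    ceph osd getcrushmap -o /tmp/crush.bin
    crushtool -d /tmp/crush.bin -o /tmp/crush.txt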

Paul

 

2018-03-15 16:27 GMT+01:00 <warren.jeffs@xxxxxxxxxx>:

Hi All,

Having some interesting challenges.

I am trying to move 2 new nodes + 2 new racks into my default root; I have added them to the cluster outside of root=default.

 

They are all in and up – happy, it seems. The new nodes have all 12 OSDs in them, and they are all ‘UP’.

So when I go to move them into the correct room bucket under the default root, the move fails.

 

This is the error log at the time: https://pastebin.com/mHfkEp3X

 

I can create another host in the crush map and move it in and out of rack buckets – all while staying outside of the default root. Trying to move an empty rack bucket into the default root fails too.
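To illustrate, a sequence like this works right up to the last step (names made up):

    ceph osd crush add-bucket testhost host       # create an empty host bucket
    ceph osd crush move testhost rack=rack04      # fine while outside default
    ceph osd crush move testhost root=default     # this step is what kills the mon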

 

The whole cluster is on 12.2.4. I do have 2 backfillfull OSDs, which is the reason for needing these disks in the cluster ASAP.
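(I could probably buy a little headroom in the meantime by watching utilisation and carefully nudging the backfillfull threshold – Luminous syntax, default ratio 0.90 – but the real fix is getting these disks in:)

    ceph osd df tree                        # per-OSD utilisation
    ceph osd set-backfillfull-ratio 0.92    # use with care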

 

Any thoughts?

 

Cheers,

Warren Jeffs

 

ISIS Infrastructure Services

STFC Rutherford Appleton Laboratory

e-mail: warren.jeffs@xxxxxxxxxx

 


--
Paul Emmerich

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
