Just as an additional option, you could also set the initial OSD crush
weight to 0 in ceph.conf:
osd_crush_initial_weight = 0
This is how we add new hosts/OSDs to the cluster; it prevents
backfilling from starting before all hosts/OSDs are in. When everything
is in place, we change the crush weight of the new OSDs and let the
backfilling begin.
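For example (osd.12 and the weight 3.64 are hypothetical; the crush
weight normally corresponds to the device size in TiB):
ceph osd crush reweight osd.12 3.64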
Regards,
Eugen
Quoting Reed Dier <reed.dier@xxxxxxxxxxx>:
Just to piggyback on this: the answers below are correct. However,
here is how I do it, which is admittedly not the best way, but it is
the easy way.
I set the norecover and nobackfill flags:
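ceph osd set norecover
ceph osd set nobackfill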
I run my OSD creation script against the first disk on the new host
to make sure that everything is working correctly, and also so that
I can then manually move my new host bucket where I need it in the
crush map with
ceph osd crush move {bucket-name} {bucket-type}={bucket-name}
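A concrete (hypothetical) example:
ceph osd crush move ceph-node05 room=room1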
Then I proceed with my script for the rest of the OSDs on that host
and know that they will fall into the correct crush location.
And then of course I unset the norecover and nobackfill flags so that
data starts moving:
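ceph osd unset norecover
ceph osd unset nobackfill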
I only mention this because it ensures that you don't fat-finger the
hostname on manual bucket creation, or that the hostname syntax
doesn't match as expected, and it allows you to course-correct after
a single OSD has been added, rather than all N OSDs.
Hope that's also helpful.
Reed
On Dec 2, 2020, at 4:38 PM, Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:
Hi Francois!
If I've understood your question, I think you have two options.
1. You should be able to create an empty host bucket, then move it into
a room, before creating any OSDs:
ceph osd crush add-bucket <hostname> host
ceph osd crush move <hostname> room=<the room>
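For example, with hypothetical host and room names:
ceph osd crush add-bucket cephserver05 host
ceph osd crush move cephserver05 room=room1
You can check the placement with 'ceph osd tree' before creating any OSDs.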
2. Add a custom crush location to ceph.conf on the new server so that
its OSDs are placed in the correct room/rack/host when they are first
created, e.g.
[osd]
crush location = room=0513-S-0034 rack=SJ04 host=cephdata20b-b7e4a773b6
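With that in place, newly created OSDs on this host should appear under
the given room/rack/host; you can verify the result with:
ceph osd tree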
Does that help?
Cheers, Dan
On Wed, Dec 2, 2020 at 11:29 PM Francois Legrand
<fleg@xxxxxxxxxxxxxx> wrote:
Hello,
I have a Ceph Nautilus cluster. The crush map is organized with 2 rooms,
servers in these rooms, and OSDs in these servers, and I have a crush
rule to replicate data over servers in different rooms.
Now I want to add a new server to one of the rooms. My point is that I
would like to specify the room of this new server BEFORE creating OSDs
on it (so that data added to the OSDs goes directly to the right
location). My problem is that servers seem to appear in the crush map
only once they have OSDs... and when you create the first OSD, the
server is inserted in the crush map under the default bucket (so not in
a room, and the first data stored on this OSD will not be in the
correct location). I could move it afterwards (if I do it quickly,
there will not be that much data to move), but I was wondering if there
is a way to either define the position of a server in the crush map
hierarchy before creating OSDs, or possibly to specify the room when
creating the first OSD?
F.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx