Re: [EXTERNAL] Re: Renaming a ceph node

Hi all, so I used the rename-bucket option this morning for OSD node renames, and it was a success.  Works great even on Luminous.

I looked at the swap-bucket command, but it seemed geared toward real data migration from old OSDs to new ones, and I was a bit timid because there was no second host, just a name change.  rename-bucket, on the other hand, seemed too simple not to try first, and it was.  I renamed two host buckets (they housed discrete storage classes, so there was no dangerous loss of data redundancy), and even some rack buckets.

sudo ceph osd crush rename-bucket <oldname> <newname>

and no data moved.  I had first thought I'd wait until the hosts were shut down, but after I stopped the OSDs on the nodes it seemed safe enough, and it was.
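
For the record, the sequence I followed was roughly this (assuming systemd-managed OSDs; unit and bucket names are placeholders):

# stop the OSDs on the node being renamed
sudo systemctl stop ceph-osd.target

# rename the host bucket; the bucket ID is preserved, so CRUSH placement does not change
sudo ceph osd crush rename-bucket <oldname> <newname>

# verify: the tree should show the new name and PGs should stay active+clean
ceph osd tree
ceph -s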

In my particular case, I was migrating nodes to a new datacenter, with just new names and IPs.  I also moved a mon/mgr/rgw; I merely had to delete the mon first, then reprovision it later.
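
For the mon, the delete-then-reprovision dance was roughly this (mon names are placeholders; the reprovision part follows the usual manual add-a-monitor steps):

# before the move: remove the old mon from the monmap
sudo ceph mon remove <old-mon-name>

# after the host is back up under its new name: reprovision the mon
ceph auth get mon. -o /tmp/mon.keyring
ceph mon getmap -o /tmp/monmap
sudo ceph-mon --mkfs -i <new-mon-name> --monmap /tmp/monmap --keyring /tmp/mon.keyring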

The rgw and mgr worked fine.  I pre-edited ceph.conf to add the new networks, remove the old mon name, and add the new mon name, so everything worked on startup.
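
The ceph.conf edits were along these lines (addresses and names here are made up, not my real values):

[global]
# new network added alongside the old for the duration of the move
public network = 10.0.0.0/24, 10.1.0.0/24
# old mon name/IP replaced with the new one
mon initial members = newmon1
mon host = 10.1.0.11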

I’m not a ceph admin but I play one on the tele.

From: Eugen Block <eblock@xxxxxx>
Date: Wednesday, February 15, 2023 at 12:44 AM
To: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
Subject: [EXTERNAL]  Re: Renaming a ceph node
Hi,

I haven't done this in a production cluster yet, only in small test
clusters without data. But there's a rename-bucket command:

ceph osd crush rename-bucket <srcname> <dstname>
                              rename bucket <srcname> to <dstname>

It should do exactly that, just rename the bucket within the crushmap
without changing its ID. That command also exists in Luminous, I
believe. To get an impression of the impact, I'd recommend trying it in
a test cluster first.
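
If you want to gauge the impact offline before touching the live map, you could compare mappings from the current and edited crush maps; the file names here are just examples:

ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt      # decompile to editable text
# edit crush.txt: change only the bucket's name, keep its "id" line as-is
crushtool -c crush.txt -o crush-new.bin  # recompile
crushtool -i crush.bin --test --show-mappings > before.txt
crushtool -i crush-new.bin --test --show-mappings > after.txt
diff before.txt after.txt                # an empty diff means no PGs would move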

Regards,
Eugen


Quoting Manuel Lausch <manuel.lausch@xxxxxxxx>:

> Hi,
>
> Yes, you can rename a node without massive rebalancing.
>
> I tested the following with Pacific, but I think it should work with
> older versions as well.
> You need to rename the node in the crushmap between shutting down the
> node under the old name and starting it under the new name.
> You just have to keep the node's ID in the crushmap!
>
> Regards
> Manuel
>
>
> On Mon, 13 Feb 2023 22:22:35 +0000
> "Rice, Christian" <crice@xxxxxxxxxxx> wrote:
>
>> Can anyone please point me at a doc that explains the most
>> efficient procedure to rename a ceph node WITHOUT causing a massive
>> misplaced objects churn?
>>
>> When my node came up with a new name, it properly joined the
>> cluster and owned the OSDs, but the original node with no devices
>> remained.  I expect this affected the crush map such that a large
>> quantity of objects got reshuffled.  I want no object movement, if
>> possible.
>>
>> BTW this old cluster is on luminous. ☹
>>


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



