Re: Separating Mons and OSDs in Ceph Cluster

Which Ceph release are you running, and how was it deployed?

With some older releases I experienced mons behaving unexpectedly when one of the quorum members bounced, so I still like to segregate them for isolation.

There was also at one point an issue where clients wouldn’t get a runtime update of new mons.

I endorse Eugen’s strategy, but must first ask about the server and client releases involved, especially since you wrote “old”.
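
If you’re on Luminous or newer, the cluster can tell you directly:

  ceph versions    # release of every running mon/mgr/osd daemon
  ceph features    # releases/feature bits of connected clients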

> On Sep 9, 2023, at 5:28 AM, Eugen Block <eblock@xxxxxx> wrote:
> 
> Hi,
> 
> is it an actual requirement to redeploy the MONs? Almost all clusters we support or assist with have MONs and OSDs colocated. MON daemons are quite lightweight services, so if it's not really necessary, I'd leave things as they are.
> If you really need to move the MONs to different servers, I'd recommend adding the new MONs one by one. Your monmap will then contain both old and new MONs, and once all new MONs (with new IPs) are up and running, you can remove the old MON daemons. There's no need to switch off OSDs or drain a host. You can find more information in the Nautilus docs [1], from before the orchestrator was available.
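
On a pre-orchestrator cluster this boils down to the standard mon commands, once the new mon’s data directory has been prepared per the docs linked below; the hostnames and IPs here are placeholders:

  ceph mon add newmon1 192.0.2.11:6789   # one at a time, letting quorum settle each time
  ceph quorum_status                     # confirm the new mon has joined
  ceph mon remove oldmon1                # once all new mons are in quorum

Then update mon_host in ceph.conf on the clients, since they only use it for the initial connection.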
> 
> Regards,
> Eugen
> 
> [1] https://docs.ceph.com/en/nautilus/rados/operations/add-or-rm-mons/
> 
> Quoting Ramin Najjarbashi <ramin.najarbashi@xxxxxxxxx>:
> 
>> Hi
>> 
>> I am writing to seek guidance and best practices for a maintenance operation
>> in my Ceph cluster. I have an older cluster in which the Monitors (Mons)
>> and Object Storage Daemons (OSDs) are currently deployed on the same host.
>> I am interested in separating them while ensuring zero downtime and
>> minimizing risks to the cluster's stability.
>> 
>> The primary goal is to deploy new Monitors on different servers without
>> causing service interruptions or disruptions to data availability.
>> 
>> The challenge arises because updating the configuration to add new Monitors
>> typically requires a restart of all OSDs, which is less than ideal in terms
>> of maintaining cluster availability.
>> 
>> One approach I considered is to reweight all OSDs on the host to zero,
>> allowing data to gradually transfer to other OSDs. Once all data has been
>> safely migrated, I would proceed to remove the old OSDs. Afterward, I would
>> deploy the new Monitors on a different server with the previous IP
>> addresses and deploy the OSDs on the old Monitors' host with new IP
>> addresses.
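
For what it’s worth, that drain step is just a CRUSH reweight per OSD plus waiting for recovery; the osd IDs here are placeholders:

  ceph osd crush reweight osd.10 0   # repeat for every OSD on the host
  ceph -s                            # wait until all PGs are active+clean again
  ceph osd df                        # verify the drained OSDs are empty

As Eugen notes, though, none of that is necessary just to move the mons.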
>> 
>> While this approach seems to minimize risks, it can be time-consuming and
>> may not be the most efficient way to achieve the desired separation.
>> 
>> I would greatly appreciate the community's insights and suggestions on the
>> best approach to achieve this separation of Mons and OSDs with zero
>> downtime and minimal risk. If there are alternative methods or best
>> practices that can be recommended, please share your expertise.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx