Re: replace OSD without PG remapping

Thank you Frank. "Degradation is exactly what needs to be
avoided/fixed at all cost" came through loud and clear; point taken!
I didn't quite get it last time. I used to think degradation
would be OK, but I agree with you now that it is not OK at all
for production storage.
Appreciate your patience!

Tony
> -----Original Message-----
> From: Frank Schilder <frans@xxxxxx>
> Sent: Tuesday, February 2, 2021 11:47 PM
> To: Tony Liu <tonyliu0592@xxxxxxxxxxx>; ceph-users@xxxxxxx
> Subject: Re: replace OSD without PG remapping
> 
> You asked about exactly this before:
> https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/IGYCYJTAMBDDOD2AQUCJQ6VSUWIO4ELW/#ZJU3555Z5WQTJDPCTMPZ6LOFTIUKKQUS
> 
> It is not possible to avoid remapping, because if the PGs were not
> remapped you would be running with degraded redundancy. In any storage
> system, this degradation is exactly what needs to be avoided/fixed at all cost.
> 
> I don't see an issue with the health status messages issued during
> self-healing. That's the whole point of ceph: just let it do its job
> and don't get freaked out by HEALTH_WARN.
> 
> You can, however, try to keep the window of rebalancing short, and this
> is exactly what was discussed in the thread above already. As is pointed
> out there as well, even this is close to pointless. Just deploy a few
> more disks than you need, let the broken ones go, and be happy that ceph
> takes care of the rest and even tells you about its progress.
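> 
> For what it's worth, keeping that window short can look roughly like
> this (a sketch only, not a recommendation):
> 
>   ceph osd set norebalance     # hold back data shuffling during the swap
>   # physically replace the disk and redeploy the OSD here
>   ceph osd unset norebalance   # one backfill phase instead of two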
> 
> Best regards,
> =================
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
> 
> ________________________________________
> From: Tony Liu <tonyliu0592@xxxxxxxxxxx>
> Sent: 03 February 2021 03:10:26
> To: ceph-users@xxxxxxx
> Subject:  replace OSD without PG remapping
> 
> Hi,
> 
> There are several different procedures for replacing an OSD.
> What I want is to replace an OSD without PG remapping.
> 
> #1
> I tried "orch osd rm --replace", which sets the OSD's reweight to 0 and
> its status to "destroyed". "orch osd rm status" shows "draining".
> All PGs on this OSD are remapped. When I check "pg dump", this OSD no
> longer shows up anywhere.
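> 
> For reference, this is roughly what I ran (osd id 12 is just an
> example):
> 
>   ceph orch osd rm 12 --replace
>   ceph orch osd rm status                # shows "draining"
>   ceph pg dump pgs_brief | grep -w 12    # crude check, osd 12 is gone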
> 
> 1) Given [1], setting the CRUSH weight to 0 seems better than setting
> the reweight to 0. Is that right? If yes, should the behavior of
> "orch osd rm --replace" be changed?
> 
> 2) "ceph status" doesn't show anything about OSD draining.
> Is there any way to see the progress of draining?
> Is there actually copy happening? The PG on this OSD is remapped and
> copied to another OSD, right?
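> 
> Is watching something like the following the best one can do here?
> 
>   ceph -s                     # overall recovery/backfill summary
>   ceph orch osd rm status     # per-OSD draining state
>   ceph pg dump pgs_brief      # up/acting sets per PG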
> 
> 3) When the OSD is replaced, there will be remapping and backfilling.
> 
> 4) So there is remapping during the draining in 2) and remapping again
> after the replacement in 3). I want to avoid this double remapping.
> 
> #2
> Is there a procedure that neither marks the OSD out (reweight 0) nor
> sets its CRUSH weight to 0, so that the PG map stays unchanged and the
> cluster only warns about reduced redundancy (one out of the 3 OSDs of a
> PG is down), and then, once the OSD is replaced, there is no remapping,
> just backfilling of the data?
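> 
> Roughly what I have in mind, as a sketch only (osd.12 and /dev/sdx are
> example names, and I am not sure this behaves the way I hope):
> 
>   ceph osd set noout                    # failed OSD stays "in", no remapping
>   ceph osd destroy 12 --yes-i-really-mean-it
>   # swap the physical disk, then recreate the OSD with the same id
>   ceph-volume lvm create --osd-id 12 --data /dev/sdx
>   ceph osd unset noout                  # backfill onto the new osd.12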
> 
> [1] https://ceph.com/geen-categorie/difference-between-ceph-osd-reweight-and-ceph-osd-crush-reweight/
> 
> 
> Thanks!
> Tony
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



