Re: OSD node OS upgrade strategy

> 
> I am wondering if it's not necessary to drain/fill OSD nodes at
> all, and if this can be done with just a fresh install that doesn't
> touch the OSDs

Absolutely.  I’ve done this both with Trusty -> Bionic and Precise -> RHEL7.

> however I don't know how to perform a fresh installation and
> then tell Ceph that I have OSDs with data on them, so that they
> somehow re-register with the cluster?

Depends in part on whether they’re ceph-volume native or ceph-disk.
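Either way, the metadata needed to bring an OSD back lives on the OSD devices themselves, so after the fresh install it’s largely a matter of rescanning.  Roughly, and assuming ceph.conf and the ceph packages are already back in place (the device path below is a placeholder):

    # ceph-volume (LVM) OSDs: the LVM tags carry what's needed to activate
    ceph-volume lvm activate --all

    # ceph-disk / "simple" OSDs: scan each data partition, then activate
    ceph-volume simple scan /dev/sdb1     # placeholder; one scan per OSD data partition
    ceph-volume simple activate --all

The simple scan writes a JSON description of each OSD under /etc/ceph/osd/, which activate then uses to mount and start it.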

> Or is there a better order of operations for draining/filling that
> doesn't cause a high number of objects to be misplaced due to
> manipulating the CRUSH map?


Set noout for just the affected OSDs, and make sure they’re all in a single failure domain.  Shut them down and unmount the data directories.  Repave the OS, carefully avoiding the OSD drives, then reactivate.  Rinse, lather, repeat for the next host.  Clear the flags once they’re back up.  If you really do need to repave the OSDs themselves, destroy them and backfill with throttling.  You’re using 3R, I presume?
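Per host, the loop looks roughly like this; osd.10 / osd.11 are placeholder IDs, substitute whatever is actually on the box:

    # pin just this host's OSDs so they aren't marked out while it's down
    ceph osd add-noout osd.10
    ceph osd add-noout osd.11

    # on the host: stop the OSDs and unmount their data dirs
    systemctl stop ceph-osd.target
    umount /var/lib/ceph/osd/ceph-*

    # ...reinstall the OS, leaving the OSD drives strictly alone...

    # reactivate (lvm or simple, as above), then clear the per-OSD flags
    ceph-volume lvm activate --all
    ceph osd rm-noout osd.10
    ceph osd rm-noout osd.11

    # if you do end up destroying and rebuilding OSDs, throttle the backfill
    ceph tell osd.* injectargs '--osd_max_backfills 1'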


> That being said, since our cluster is a bit older and the majority of
> our bluestore osd's are provisioned in the 'simple' method using a
> small metadata partition and the remainder as a raw partition where
> now it seems the suggested way is to use the lvm layout and tmpfs.

So they’re grandfathered ceph-disk OSDs.  Are they really using separate partitions, or are the WAL and DB just co-located with the data?
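ceph osd metadata will tell you; on the releases I’ve used, something like this (osd.12 is a placeholder ID):

    ceph osd metadata 12 | grep -E 'osd_objectstore|bluefs_dedicated_db|bluefs_dedicated_wal'

If both bluefs_dedicated_* fields come back 0, everything is sharing the one device.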

What are you trying to accomplish here?  Are the existing OS installs “bad”?  E.g. a / filesystem that’s too small, or inconsistent in some way?

Prima facie I don’t see - from what you’ve said - a significant benefit.  If your OSDs were substantially Filestore there would be slightly more motivation, but even then it wouldn’t be compelling.




> 
> Anyways, I'm all ears and appreciate any feedback.
> 
> Jared Baker
> Ontario Institute for Cancer Research
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



