OSD node OS upgrade strategy

Hi all,

I have a 39-node, 1,404-spinning-disk Ceph Mimic cluster across 6 racks,
for a total of 9.1 PiB raw, about 40% utilized. These storage nodes
started their life on Ubuntu 14.04 and were in-place upgraded to 16.04
two years ago; however, I have now started a project to do fresh
installs of each OSD node on Ubuntu 18.04 to keep things current and
well supported. I am reaching out to see what others might suggest as a
strategy to get these hosts updated faster than my current approach.

Current strategy (rough commands for the relevant steps are sketched
below):
1. Pick 3 nodes and drain them by lowering their CRUSH weights
2. Fresh install 18.04 using an automation tool (MAAS) plus some
Ansible playbooks to set up the server
3. Purge that node's worth of OSDs (this marks data as 'misplaced'
because the rack weight changes)
4. Run ceph-volume lvm batch to recreate the node's OSDs
5. Move the new OSDs back to the desired spot in the CRUSH map
(large rebalance to fill them back up)
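
For reference, steps 1, 3, 4 and 5 boil down to something like the
following. The OSD id, devices, weight and the node07/rack3 bucket
names are placeholders, so treat this as a sketch rather than
copy-paste material:

  # step 1: drain by zeroing the CRUSH weight, per OSD or per host bucket
  ceph osd crush reweight osd.123 0
  ceph osd crush reweight-subtree node07 0

  # step 3: once empty, check it is safe and purge the OSDs
  ceph osd safe-to-destroy 123
  ceph osd purge 123 --yes-i-really-mean-it

  # step 4: recreate the OSDs on the freshly installed node
  ceph-volume lvm batch --bluestore /dev/sda /dev/sdb /dev/sdc

  # step 5: place each new OSD under the right rack/host with its full
  # weight again (this is the large rebalance back in)
  ceph osd crush set osd.123 7.3 root=default rack=rack3 host=node07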

If anyone has suggestions on a quicker way to do this, I am all ears.

I am wondering whether it is necessary to drain/fill the OSD nodes at
all, and whether this could be done with just a fresh OS install,
without touching the OSDs. However, I don't know how to perform a
fresh installation and then tell Ceph that I have OSDs with data on
them so that they re-register with the cluster. Or is there a better
order of operations for draining/filling that avoids a large number of
objects being misplaced due to manipulating the CRUSH map?
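
If keeping the OSDs intact is possible, I imagine the rough shape
would be something like the following, but I honestly don't know
whether this is the supported way to re-adopt existing OSDs after a
reinstall, so please correct me (device name is an example):

  # before taking the node down
  ceph osd set noout

  # after the fresh install: restore /etc/ceph/ceph.conf and the OSD
  # bootstrap keyring, then try to pick the existing OSDs back up
  ceph-volume simple scan /dev/sdb1      # per old-style data partition
  ceph-volume simple activate --all

  # presumably the equivalent for LVM-provisioned OSDs would be
  ceph-volume lvm activate --all

  # once everything is back up and in
  ceph osd unset noout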

That being said, our cluster is a bit older, and the majority of our
bluestore OSDs are provisioned with the 'simple' method, using a small
metadata partition with the remainder as a raw block partition, whereas
the suggested layout now seems to be lvm with tmpfs-backed metadata.
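
For what it's worth, this is roughly how the two layouts show up on
our nodes (OSD id 123 is just an example):

  # old-style 'simple' OSDs: a small xfs metadata partition mounted at
  # /var/lib/ceph/osd/ceph-<id>, with 'block' a symlink to the raw
  # bluestore partition
  mount | grep /var/lib/ceph/osd
  ls -l /var/lib/ceph/osd/ceph-123/block

  # lvm-provisioned OSDs would be listed here instead, with their
  # metadata kept in LVM tags and the osd dir on tmpfs at runtime
  ceph-volume lvm list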

Anyway, I'm all ears and appreciate any feedback.

Jared Baker
Ontario Institute for Cancer Research
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


