Re: Proper procedure to replace DB/WAL SSD

Hi *,

sorry for bringing up this old topic again, but we just faced exactly this situation and have successfully tested two migration scenarios.

Quoting ceph-users-request@xxxxxxxxxxxxxx:
Date: Sat, 24 Feb 2018 06:10:16 +0000
From: David Turner <drakonstein@xxxxxxxxx>
To: Nico Schottelius <nico.schottelius@xxxxxxxxxxx>
Cc: Caspar Smit <casparsmit@xxxxxxxxxxx>, ceph-users
	<ceph-users@xxxxxxxxxxxxxx>
Subject: Re:  Proper procedure to replace DB/WAL SSD

Caspar, it looks like your idea should work. Worst case scenario seems to be
that the OSD wouldn't start; you'd put the old SSD back in and go back to the
plan of weighting them to 0, backfilling, then recreating the OSDs. Definitely
worth a try in my opinion, and I'd love to hear about your experience afterwards.

Nico, it is not possible to change the WAL or DB size, location, etc. after
OSD creation.

Actually, it is possible to move a separate WAL/DB to a new device, though without changing its size. We have done this for multiple OSDs, using only existing (mainstream :) ) tools, and have documented the procedure at http://heiterbiswolkig.blogs.nde.ag/2018/04/08/migrating-bluestores-block-db/ . It will *not* let you split off the WAL/DB after OSD creation, nor does it allow changing the DB size.
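
For a rough idea of what such a move looks like (only a sketch, not a substitute for the write-up above; the OSD id and device paths are placeholders, and the new DB device has to be at least as large as the old one):

    # stop the OSD so the DB device is no longer in use
    systemctl stop ceph-osd@<id>

    # copy the old DB device bit-for-bit onto the new, equally sized device
    dd if=/dev/<old-db-dev> of=/dev/<new-db-dev> bs=1M status=progress

    # point the OSD at the new device and fix ownership of the symlink
    ln -sf /dev/<new-db-dev> /var/lib/ceph/osd/ceph-<id>/block.db
    chown -h ceph:ceph /var/lib/ceph/osd/ceph-<id>/block.db

    # sanity-check the BlueStore labels, then start the OSD again
    ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-<id>
    systemctl start ceph-osd@<id>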

As we faced a failing WAL/DB SSD during one of the moves (fatal read errors from the DB block device), we also worked out a procedure to re-initialize the OSD as "empty" during that operation, so that it gets backfilled again without changing the OSD map: http://heiterbiswolkig.blogs.nde.ag/2018/04/08/resetting-an-existing-bluestore-osd/
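
If you only need the end result and don't mind wiping the data device, one way to get an "empty" OSD that keeps its id and CRUSH position (and therefore gets backfilled without any OSD map change) is the destroy/recreate cycle available since Luminous. This is a sketch with placeholder devices, not necessarily the exact steps from the post above:

    # keep the OSD's id and CRUSH position, but mark it destroyed
    ceph osd destroy <id> --yes-i-really-mean-it

    # wipe the data device and re-create the OSD, reusing the same id
    ceph-volume lvm zap /dev/<osd-data-dev>
    ceph-volume lvm create --osd-id <id> --data /dev/<osd-data-dev> \
        --block.db /dev/<new-db-dev>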

HTH
Jens

PS: Live WAL/DB migration is something that can be done easily when using logical volumes, which is why I'd highly recommend going that route instead of using partitions. LVM not only helps when the SSDs reach their EOL, but also with live load balancing (distributing WAL/DB LVs across multiple SSDs).
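
To illustrate the point (the volume group and LV names are made up, the commands are standard LVM): once the DB LVs live in a VG, a failing or retiring SSD can be swapped out underneath a running OSD with pvmove:

    # add the new SSD to the VG that holds the DB LVs
    vgextend ceph-db-vg /dev/<new-ssd>

    # move a single OSD's DB LV onto the new SSD while the OSD keeps running
    pvmove -n osd-12-db /dev/<old-ssd> /dev/<new-ssd>

    # once nothing is left on the old SSD, drop it from the VG
    vgreduce ceph-db-vg /dev/<old-ssd>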


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


