Re: right way to recover a failed OSD (disk) when using BlueStore ?

David, thanks.
I've switched the branch to Luminous and the doc is the same (thankfully).

No worries, I'll wait until someone who has hopefully done this already can give me a hint.
thanks!

On Wed, Oct 11, 2017 at 11:00 AM, David Turner <drakonstein@xxxxxxxxx> wrote:
Careful when you're looking at documentation.  You're looking at the master branch, which might have unreleased features or changes that your release doesn't have.  You'll want to change "master" in the URL to "luminous" to make sure you're looking at the documentation for your version of Ceph.
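For example (picking the add-or-rm-osds page purely as an illustration, substitute whatever page you are actually on):

http://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/
http://docs.ceph.com/docs/luminous/rados/operations/add-or-rm-osds/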

I haven't personally used BlueStore yet, so I can't say what the proper commands are there without just looking online for the answer.  I do know that there is no reason to have your DB and WAL on separate partitions if they're on the same device.  What's been mentioned on the ML is that you create a partition for the DB and the WAL will use it; a separate WAL partition is only needed if the WAL is planned to live on a different device than the DB.
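Going just by the docs (I haven't run this myself, so treat it purely as a sketch; /dev/sdx and /dev/nvme1n1p17 are placeholders), that would look something like:

# Untested sketch: BlueStore OSD with data on sdx and only a DB partition
# on the NVMe -- the WAL then lives inside the DB partition.
ceph-disk prepare --bluestore --block.db /dev/nvme1n1p17 /dev/sdx
ceph-disk activate /dev/sdx1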

On Tue, Oct 10, 2017 at 5:59 PM Alejandro Comisario <alejandro@xxxxxxxxxxx> wrote:
Hi, I see some notes there that didn't exist on Jewel:


In my case, what I'm using right now on that OSD is this:

root@ndc-cl-osd4:~# ls -lsah /var/lib/ceph/osd/ceph-104
total 64K
   0 drwxr-xr-x  2 ceph ceph  310 Sep 21 10:56 .
4.0K drwxr-xr-x 25 ceph ceph 4.0K Sep 21 10:56 ..
   0 lrwxrwxrwx  1 ceph ceph   58 Sep 21 10:30 block -> /dev/disk/by-partuuid/0ffa3ed7-169f-485c-9170-648ce656e9b1
   0 lrwxrwxrwx  1 ceph ceph   58 Sep 21 10:30 block.db -> /dev/disk/by-partuuid/5873e2cb-3c26-4a7d-8ff1-1bc3e2d62e5a
   0 lrwxrwxrwx  1 ceph ceph   58 Sep 21 10:30 block.wal -> /dev/disk/by-partuuid/aed9e5e4-c798-46b5-8243-e462e74f6485

block.db and block.wal are on two different NVMe partitions, which are nvme1n1p17 and nvme1n1p18. So, assuming that after hot-swapping the device the drive letter is "sdx", according to the link above, what would be the right command to re-use the two NVMe partitions for block.db and block.wal?
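My guess, just to illustrate what I mean (I have not tried this, and the device names are only the ones from this host), would be something like:

# Only a guess, not verified: re-create the OSD on the replacement disk,
# pointing it at the existing NVMe partitions for DB and WAL.
ceph-disk prepare --bluestore --block.db /dev/nvme1n1p17 --block.wal /dev/nvme1n1p18 /dev/sdx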

I presume that everything else is the same.
best.


On Sat, Sep 30, 2017 at 9:00 PM, David Turner <drakonstein@xxxxxxxxx> wrote:

I'm pretty sure that the process is the same as with FileStore. The cluster doesn't really know if an OSD is FileStore or BlueStore... it's just an OSD running a daemon.
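In other words, the usual out/remove/re-add dance; a rough outline (swap <id> for the OSD number; this is only the generic flow, not BlueStore-specific advice):

# Generic OSD replacement flow, same as with FileStore:
ceph osd out <id>
systemctl stop ceph-osd@<id>      # on the OSD host
ceph osd crush remove osd.<id>
ceph auth del osd.<id>
ceph osd rm <id>
# ...then prepare/activate the new disk and let it backfill.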

If there are any differences, they would be in the release notes for Luminous as changes from Jewel.


On Sat, Sep 30, 2017, 6:28 PM Alejandro Comisario <alejandro@xxxxxxxxxxx> wrote:
Hi all.
Independently of the fact that I've deployed a Ceph Luminous cluster with BlueStore using ceph-ansible (https://github.com/ceph/ceph-ansible), what is the right way to replace a disk when using BlueStore?

I will try to forget everything I know about how to recover things with FileStore and start fresh.

Any how-tos? Experiences? I don't seem to find an official way of doing it.
best.

--
Alejandro Comisario
CTO | NUBELIU
E-mail: alejandro@xxxxxxxxxxx  Cell: +54 9 11 3770 1857
www.nubeliu.com



--
Alejandro Comisario
CTO | NUBELIU
E-mail: alejandro@xxxxxxxxxxx  Cell: +54 9 11 3770 1857



--
Alejandro Comisario
CTO | NUBELIU
E-mail: alejandro@xxxxxxxxxxx  Cell: +54 9 11 3770 1857
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
