Re: right way to recover a failed OSD (disk) when using BlueStore?

I'm pretty sure the process is the same as with FileStore. The cluster doesn't really know whether an OSD is FileStore or BlueStore... it's just an OSD running a daemon.

If there are any differences, they would be in the release notes for Luminous as changes from Jewel.
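
For reference, here's a minimal sketch of the generic replace-an-OSD flow that should apply either way. Assumptions: the failed OSD is osd.<id> and the replacement disk is /dev/sdX on the OSD host; both are placeholders, so adjust for your cluster:

    # <id> and /dev/sdX below are placeholders
    # Mark the failed OSD out and wait for the cluster to finish rebalancing
    ceph osd out <id>

    # On the OSD host: stop the daemon for the dead OSD
    systemctl stop ceph-osd@<id>

    # Remove the OSD from the CRUSH map, delete its auth key,
    # and remove it from the OSD map
    ceph osd crush remove osd.<id>
    ceph auth del osd.<id>
    ceph osd rm <id>
    # (Luminous adds 'ceph osd purge <id> --yes-i-really-mean-it',
    #  which rolls those three steps into one)

    # On the OSD host: wipe the new disk and bring it up as a BlueStore OSD
    ceph-disk zap /dev/sdX                   # destroys everything on /dev/sdX
    ceph-disk prepare --bluestore /dev/sdX
    ceph-disk activate /dev/sdX1

Since you deployed with ceph-ansible, you may also be able to just re-run the playbook against that host after zapping the disk and let it recreate the OSD, but the manual flow above is worth knowing either way.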


On Sat, Sep 30, 2017, 6:28 PM Alejandro Comisario <alejandro@xxxxxxxxxxx> wrote:
Hi all.
Independently of the fact that I've deployed a Ceph Luminous cluster with BlueStore using ceph-ansible (https://github.com/ceph/ceph-ansible), what is the right way to replace a disk when using BlueStore?

I will try to forget everything I know about how to recover things with FileStore and start fresh.

Any how-tos? Experiences? I don't seem to find an official way of doing it.
Best.

--
Alejandro Comisario
CTO | NUBELIU
E-mail: alejandro@xxxxxxxxxxx
Cell: +54 9 11 3770 1857
www.nubeliu.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
