Re: Where to place Block-DB?


As freshly formatted OSDs. All data on them is lost if the NVMe dies...
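For reference, the ceph-volume layout discussed below can be sketched as follows; the device paths are hypothetical and must be adapted to the actual hardware:

```shell
# Hypothetical devices: /dev/sdb is one of the HDDs,
# /dev/nvme0n1p1 is a partition on the shared NVMe.
# With only --block.db given (no separate --block.wal),
# the WAL is kept inside the block.db partition as well.
ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1
```

If the NVMe fails, every OSD whose block.db partition lives on it is lost and has to be redeployed.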

On Thu, Apr 26, 2018 at 1:39 PM, Kevin Olbrich <ko@xxxxxxx> wrote:
>>>What happens if the NVMe dies?
>
>>You lose the OSDs backed by that NVMe and need to re-add them to the cluster.
>
> With data located on the OSD (recovery) or as fresh formatted OSD?
> Thank you.
>
> - Kevin
>
>
> 2018-04-26 12:36 GMT+02:00 Serkan Çoban <cobanserkan@xxxxxxxxx>:
>>
>> >On bluestore, is it safe to move both Block-DB and WAL to this journal
>> > NVMe?
>> Yes, just specify block-db with ceph-volume and the WAL will also use
>> that partition. You can put 12-18 HDDs per NVMe.
>>
>> >What happens if the NVMe dies?
>> You lose the OSDs backed by that NVMe and need to re-add them to the cluster.
>>
>> On Thu, Apr 26, 2018 at 12:58 PM, Kevin Olbrich <ko@xxxxxxx> wrote:
>> > Hi!
>> >
>> > On a small cluster I have an Intel P3700 as the journaling device for 4
>> > HDDs.
>> > While using filestore, I used it as journal.
>> >
>> > On bluestore, is it safe to move both Block-DB and WAL to this journal
>> > NVMe?
>> > Easy maintenance is the first priority (on filestore we just had to
>> > flush the journal and replace the SSD).
>> >
>> > What happens if the NVMe dies?
>> >
>> > Thank you.
>> >
>> > - Kevin
>> >
>> > _______________________________________________
>> > ceph-users mailing list
>> > ceph-users@xxxxxxxxxxxxxx
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >
>
>
