Re: Bluestore disk colocation using NVRAM, SSD and SATA

But, for example, on the same server I have three disk technologies to deploy pools on: SSD, SAS and SATA.
The NVMe devices were bought with only the SATA and SAS journals in mind, since the journals for the SSDs were colocated.

But now, in exactly the same scenario, should I trust the NVMe for the SSD pool? Is the gain really that large compared to colocating block.* on the same SSD?
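For reference, the colocated alternative is simply the OSD created on the one device, with no --block-wal/--block-db arguments (a minimal sketch; the data device name here is a placeholder, not from the thread):

# colocated: without --block-wal/--block-db, BlueFS keeps the DB and WAL on the OSD device itself
ceph-deploy osd create --bluestore c0osd-136:/dev/sdb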

Best.

On Wed, Sep 20, 2017 at 6:36 PM, Nigel Williams <nigel.williams@xxxxxxxxxxx> wrote:
On 21 September 2017 at 04:53, Maximiliano Venesio <massimo@xxxxxxxxxxx> wrote:
Hi guys, I'm reading different documents about BlueStore, and none of them recommends using NVRAM to store the BlueFS DB; nevertheless, the official documentation says it is better to put block.db on the faster device.

Likely not mentioned since no one has yet had the opportunity to test it.

So how should I deploy with BlueStore? Where should I put block.wal and block.db?

block.* would be best on your NVRAM device, like this:

ceph-deploy osd create --bluestore c0osd-136:/dev/sda --block-wal /dev/nvme0n1 --block-db /dev/nvme0n1
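If several OSDs end up sharing that NVMe device, you would normally pre-partition it and give each OSD its own WAL and DB partitions rather than pointing them all at the whole device. An untested sketch (partition sizes and names are assumptions):

# sketch only: sizes are assumptions, adjust to your workload and device
sgdisk --new=1:0:+2G --change-name=1:osd-sda-wal /dev/nvme0n1
sgdisk --new=2:0:+30G --change-name=2:osd-sda-db /dev/nvme0n1
# then point the OSD at its own partitions instead of the whole device
ceph-deploy osd create --bluestore c0osd-136:/dev/sda --block-wal /dev/nvme0n1p1 --block-db /dev/nvme0n1p2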




--
Alejandro Comisario
CTO | NUBELIU
E-mail: alejandro@xxxxxxxxxxx
Cell: +54 9 11 3770 1857
www.nubeliu.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
