Re: Bluestore disk colocation using NVRAM, SSD and SATA

Is there any guidance on sizing the WAL and DB devices when they are separated out onto an SSD/NVMe?  I understand there probably isn't a one-size-fits-all number, but perhaps something as a function of cluster/usage parameters like OSD size and usage pattern (amount of writes, number/size of objects, etc.)?
Also, once the sizes are chosen and the OSD is in use, is there a way to tell how much of each space is actually used?
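For what it's worth, the OSD admin socket appears to expose BlueFS space counters; a sketch of checking them, assuming osd.0 and the default admin socket path:

ceph daemon osd.0 perf dump

The bluefs section of the output includes counters such as db_total_bytes, db_used_bytes, wal_total_bytes, and wal_used_bytes, which seem to report how much of each device is in use, though I don't know how authoritative they are.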

Thanks,

Andras


On 09/20/2017 05:36 PM, Nigel Williams wrote:
On 21 September 2017 at 04:53, Maximiliano Venesio <massimo@xxxxxxxxxxx> wrote:
Hi guys, I'm reading various documents about BlueStore, and none of them recommend using NVRAM to store the BlueFS DB; nevertheless, the official documentation says it is better to put block.db on the faster device.

Likely not mentioned since no one has yet had the opportunity to test it.

So how should I deploy with BlueStore? Where should I put block.wal and block.db?

block.* would be best on your NVRAM device, like this:

ceph-deploy osd create --bluestore c0osd-136:/dev/sda --block-wal /dev/nvme0n1 --block-db /dev/nvme0n1
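One caveat worth noting: if block.wal and block.db end up on the same device, it should be enough to specify only --block-db, since BlueStore places the WAL alongside the DB when no separate WAL device is given. The partition sizes that ceph-deploy/ceph-disk create can be set in ceph.conf before deploying; a sketch with purely illustrative sizes:

[global]
# Hypothetical example values; actual sizing depends on workload.
bluestore_block_db_size = 32212254720   # 30 GiB for block.db
bluestore_block_wal_size = 1073741824   # 1 GiB for block.wal

These numbers are placeholders, not recommendations; the right sizes depend on OSD size and write pattern.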




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

