Re: Bluestore disk colocation using NVRAM, SSD and SATA

I believe you should use only one big partition in the case of one device per OSD.
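
Something like this, for example (host and device names are just placeholders; ceph-disk partitions the device by itself):

    ceph-deploy osd create --bluestore osd-host:/dev/sdb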

And when using additional device(s) for the WAL/DB, the block.db size is set to 1% of the main partition by default (at least according to the ceph-disk sources: it just takes bluestore_block_size, divides it by 100 and uses the result as bluestore_block_db_size if that is not set in the config).
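
As a rough sketch of that derivation (the 4 TB device size is just an example):

    # default, as described above: bluestore_block_db_size = bluestore_block_size / 100
    # e.g. for a 4 TB main device:
    echo $(( 4 * 1000 * 1000 * 1000 * 1000 / 100 ))   # 40000000000 bytes, i.e. ~40 GB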

The actual size needed will vary depending on the workload and the number of objects. I haven't found any info on how to check the current db size yet, but I didn't dig into that much.
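
One thing that may expose it (an assumption on my side, not tested): the bluefs perf counters, e.g.:

    # run on the OSD host; osd.0 is just an example id
    ceph daemon osd.0 perf dump | python -m json.tool \
        | grep -E '"(db|wal)_(total|used)_bytes"'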

Best regards,
Vladimir

2017-09-21 13:01 GMT+05:00 Dietmar Rieder <dietmar.rieder@xxxxxxxxxxx>:
Hi,

I'm in the same situation (NVMEs, SSDs, SAS HDDs), and I asked myself the
same questions.
For now I decided to use the NVMEs as wal and db devices for the SAS
HDDs, while on the SSDs I colocate wal and db.
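Roughly like this (host and device names below are placeholders, not my
actual layout):

    # HDD OSD, wal and db on the NVMe:
    ceph-deploy osd create --bluestore osd-host:/dev/sdc \
        --block-wal /dev/nvme0n1 --block-db /dev/nvme0n1
    # SSD OSD, everything colocated on the SSD itself:
    ceph-deploy osd create --bluestore osd-host:/dev/sdd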

However, I'm still wondering whether I should change the default sizes
of the wal and db, and if so, to what size.
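
If I do change them, I assume it would go into ceph.conf before creating
the OSDs, something like this (the sizes are placeholders, not a
recommendation):

    [osd]
    # values are in bytes
    bluestore_block_db_size  = 32212254720   # 30 GiB, placeholder
    bluestore_block_wal_size = 1073741824    # 1 GiB, placeholder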

Dietmar

On 09/21/2017 01:18 AM, Alejandro Comisario wrote:
> But, for example, on the same server I have 3 disk technologies for
> deploying pools: SSD, SAS and SATA.
> The NVMEs were bought with just the journals for SATA and SAS in mind,
> since the journals for the SSDs were colocated.
>
> But now, in exactly the same scenario, should I trust the NVME for the
> SSD pool? Is there that much of a gain, compared to colocating block.*
> on the same SSD?
>
> best.
>
> On Wed, Sep 20, 2017 at 6:36 PM, Nigel Williams
> <nigel.williams@xxxxxxxxxxx> wrote:
>
>     On 21 September 2017 at 04:53, Maximiliano Venesio
>     <massimo@xxxxxxxxxxx> wrote:
>
>         Hi guys, I'm reading various documents about bluestore, and
>         none of them recommends using NVRAM to store the bluefs db;
>         nevertheless, the official documentation says it is better to
>         put block.db on the faster device.
>
>
>     Likely not mentioned since no one yet has had the opportunity to
>     test it.
>
>         So, how should I deploy with bluestore? Where should I put
>         block.wal and block.db?
>
>
>     block.* would be best on your NVRAM device, like this:
>
>     ceph-deploy osd create --bluestore c0osd-136:/dev/sda --block-wal
>     /dev/nvme0n1 --block-db /dev/nvme0n1
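>
>     (If I recall the fallback behaviour correctly, which is an
>     assumption on my part: when only --block-db is given, the WAL is
>     placed on the DB device anyway, so the explicit --block-wal may be
>     redundant and this variant should be equivalent:)
>
>     ceph-deploy osd create --bluestore c0osd-136:/dev/sda --block-db /dev/nvme0n1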
>
>
> --
> *Alejandro Comisario*
> *CTO | NUBELIU*
> E-mail: alejandro@xxxxxxxxxxx  Cell: +54 9 11 3770 1857
> www.nubeliu.com
>
>


--
_________________________________________
D i e t m a r  R i e d e r, Mag.Dr.
Innsbruck Medical University
Biocenter - Division for Bioinformatics
Innrain 80, 6020 Innsbruck
Phone: +43 512 9003 71402
Fax: +43 512 9003 73100
Email: dietmar.rieder@xxxxxxxxxxx
Web:   http://www.icbi.at







_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
