Re: WAL/DB size

Hi,

> Are you asking us to do 40GB * 5 partitions on SSD just for block.db?

Yes. By default, Ceph places the WAL on the same device as block.db if no separate WAL device is specified.
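
For example (a minimal sketch; the device and volume names are
illustrative, not from this thread):

  # block.db goes on the fast device; with no --block.wal specified,
  # the WAL is placed alongside block.db on that same device
  ceph-volume lvm create --bluestore --data /dev/sdb --block.db ceph-db/db-1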

Regards,
Eugen


Quoting Muhammad Junaid <junaid.fsd.pk@xxxxxxxxx>:

Thanks Alfredo. Just to clarify: my configuration has 5 OSDs (7200 rpm
SAS HDDs), which are slower than the 200G SSD. That's why I asked for a
10G WAL partition for each OSD on the SSD.

Are you asking us to do 40GB * 5 partitions on the SSD just for block.db?

On Fri, Sep 7, 2018 at 5:36 PM Alfredo Deza <adeza@xxxxxxxxxx> wrote:

On Fri, Sep 7, 2018 at 8:27 AM, Muhammad Junaid <junaid.fsd.pk@xxxxxxxxx>
wrote:
> Hi there
>
> Asking these questions as a newbie. They may have been asked many times
> before, but sorry, it is still not clear to me.
>
> 1. Is the WAL device just like the journaling device used before
> BlueStore, i.e. Ceph confirms the write to the client after writing to
> the WAL (before the actual write to the primary device)?
>
> 2. If we have, let's say, 5 OSDs (4 TB SAS) and one 200GB SSD, should we
> partition the SSD into 10 partitions? Should/can we set the WAL partition
> size for each OSD to 10GB, or what min/max should we set for the WAL
> partition? And can we set the remaining 150GB as (30GB * 5) for 5 DB
> partitions for all OSDs?

A WAL partition would only help if you have a device faster than the
SSD where the block.db would go.

We recently updated our sizing recommendations: block.db should be at
least 4% of the size of block (also referred to as the data device):


http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/#sizing
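
For a 4 TB data device, that guideline works out to roughly
4 TB * 0.04 = 160 GB of block.db per OSD.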

In your case, what you want is to create 5 logical volumes from your
200GB at 40GB each, without a need for a WAL device.
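
A minimal sketch of that layout (assuming the 200GB SSD is /dev/sdf and
the first data disk is /dev/sda; device names are illustrative, and in
practice the last LV may need to be slightly smaller to leave room for
VG metadata):

  # Carve the SSD into five 40GB logical volumes for block.db
  pvcreate /dev/sdf
  vgcreate ceph-db /dev/sdf
  for i in 1 2 3 4 5; do lvcreate -L 40G -n db-$i ceph-db; done

  # One OSD per HDD, with block.db on the SSD and no separate WAL device
  ceph-volume lvm create --bluestore --data /dev/sda --block.db ceph-db/db-1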


>
> Thanks in advance. Regards.
>
> Muhammad Junaid
>




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


