Re: BlueStore wal vs. db size

 The workload is relatively heavy read/write of objects through radosgw, at Gbps+ in both directions.  The OSDs are spinning disks; the journals (Filestore until now) are on SSDs, with four OSDs per journal disk.

On Wed, Aug 15, 2018 at 10:58 AM, Wido den Hollander <wido@xxxxxxxx> wrote:


On 08/15/2018 05:57 PM, Robert Stanford wrote:
>
>  Thank you, Wido.  I don't want to make any assumptions, so let me
> verify: that's 10 GB of DB per 1 TB of storage on that OSD alone,
> right?  So if I have 4 OSDs sharing the same SSD journal, each 1 TB,
> there are four 10 GB DB partitions on that SSD?
>

Yes, that is correct.

Each OSD needs about 10 GB of DB space per 1 TB of storage, so size
your SSD according to your storage needs.
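Expressed as ceph.conf options, that rule of thumb might look like the sketch below. The option names (bluestore_block_db_size, bluestore_block_wal_size) take values in bytes; the figures shown are assumptions following the 10 GB-per-TB and 1 GB-WAL guidance in this thread, not tuned values:

```ini
# Sketch only: per-OSD BlueStore sizes following the rules of thumb in
# this thread (10 GB of DB per TB of data, 1 GB of WAL). Values are bytes.
[osd]
bluestore_block_db_size  = 10737418240   # 10 GiB
bluestore_block_wal_size = 1073741824    # 1 GiB
```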

However, whether you need to offload the WAL+DB to an SSD at all depends
on the workload. What is the workload?

Wido
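Put concretely, the arithmetic for the four-OSD example above can be sketched in shell. The counts and sizes are taken from this thread; nothing here queries a live cluster:

```shell
# Sizing sketch: 4 x 1 TB OSDs sharing one SSD, using the rules of
# thumb from this thread (10 GB DB per TB of data, 1 GB WAL per OSD).
OSDS=4
TB_PER_OSD=1
DB_GB_PER_TB=10
WAL_GB_PER_OSD=1
# Space one OSD needs on the SSD, and the total for all four.
PER_OSD_GB=$(( TB_PER_OSD * DB_GB_PER_TB + WAL_GB_PER_OSD ))
TOTAL_GB=$(( OSDS * PER_OSD_GB ))
echo "Per-OSD SSD space: ${PER_OSD_GB} GB"   # 11 GB
echo "Total SSD space:   ${TOTAL_GB} GB"     # 44 GB
```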

> On Wed, Aug 15, 2018 at 1:59 AM, Wido den Hollander <wido@xxxxxxxx
> <mailto:wido@xxxxxxxx>> wrote:
>
>
>
>     On 08/15/2018 04:17 AM, Robert Stanford wrote:
>     > I am keeping the WAL and DB for a Ceph cluster on an SSD.  I am using
>     > the bluestore_block_db_size / bluestore_block_wal_size
>     > parameters in ceph.conf to specify how big they should be.  Should these
>     > values be the same, or should one be much larger than the other?
>     >
>
>     This has been answered multiple times on this mailing list in recent
>     months; a bit of searching would have helped.
>
>     Nevertheless, 1 GB for the WAL is sufficient; then allocate about 10 GB
>     of DB per TB of storage. That should be enough in most use cases.
>
>     Now, if you can spare more DB space, do so!
>
>     Wido
>
>     >  R
>     >
>     >
>
>

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
