Re: Does it impact write performance when SSD applies into block.wal (not block.db)

> Hi everyone,
> 
> I saw that BlueStore can separate block.db and block.wal.
> In my case, I'd like to use a hybrid setup with SSD and HDD to improve
> small-write performance, but I don't have enough SSD capacity to cover
> both block.db and block.wal. So I'm wondering whether SSD can still
> improve performance even if it backs only block.wal.
> As far as I know, block.wal sizing depends on RocksDB parameters, so it
> might not need much SSD.
> 
> 1.
> If I use SSD only for block.wal,
> does it improve write performance for small data?

I *think* by default only writes that are smaller than the min_alloc_size the OSD was created with will be staged in the WAL.  In recent releases that defaults to 4KB.
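To make the rule above concrete, here is a small illustrative sketch (not Ceph source code; BlueStore's actual deferred-write logic involves more tunables) modeling "writes smaller than min_alloc_size are staged in the WAL":

```python
# Illustrative sketch only: models the rule described above, where
# writes smaller than the OSD's min_alloc_size are deferred via the
# WAL, and larger writes go straight to the data device.

MIN_ALLOC_SIZE = 4096  # bytes; the recent default mentioned above


def is_wal_staged(write_size: int, min_alloc_size: int = MIN_ALLOC_SIZE) -> bool:
    """Return True if a write of this size would be staged in the WAL."""
    return write_size < min_alloc_size


print(is_wal_staged(512))    # small write: deferred through the WAL
print(is_wal_staged(65536))  # large write: bypasses the WAL
```

So with only block.wal on SSD, you'd mainly expect a benefit for that small-write path.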


> 3.
> How much SSD do I need for block.wal relative to HDD(if I have 100TB)?

cf.  https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/SBNRW5R22IE3OVOR57DRL2ULFTWXLAGQ/

The WAL size is, I believe, constant at 1GB per OSD.

Be careful that you don’t share your SSD devices with too many HDDs.  In the Filestore days, conventional wisdom was not to share a SAS/SATA SSD across more than 4-5 HDD OSDs; with an NVMe SSD, perhaps as many as 10.  If you exceed that ratio, the SSD becomes the bottleneck and you may end up slower than with pure HDD OSDs.
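Putting those rules of thumb together, here is a back-of-the-envelope sizing sketch (the HDD count and per-OSD layout are assumptions for illustration, not official guidance):

```python
# Rough sizing sketch using the rules of thumb above: ~1GB WAL per OSD,
# and no more than 4-5 HDD OSDs per SATA/SAS SSD (up to ~10 per NVMe).

hdd_count = 10              # assume 100TB raw as 10x 10TB HDD OSDs
wal_per_osd_gb = 1          # constant WAL size per OSD
max_hdds_per_sata_ssd = 5   # conventional ratio for a SATA/SAS SSD

total_wal_gb = hdd_count * wal_per_osd_gb
ssds_needed = -(-hdd_count // max_hdds_per_sata_ssd)  # ceiling division

print(f"WAL capacity needed: {total_wal_gb} GB")
print(f"SATA/SAS SSDs needed at 5 HDDs each: {ssds_needed}")
```

In other words, the WAL capacity itself is tiny; the SSD-to-HDD fan-out ratio is the real constraint.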

Naturally the best solution is to not use HDDs at all ;)

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
