Re: How I disable DB and WAL for an OSD for improving 8K performance

Thanks a lot, Boris.

Do you mean that the best practice would be to create a DB partition on the SSD that serves as the OSD, and disable the WAL by setting bluestore_prefer_deferred_size=0 and bluestore_prefer_deferred_size_ssd=0?

Or is there no need to create a DB partition on the SSD at all, letting the OSD manage everything, including data and metadata?

I do not know which strategy is best in terms of performance.
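
For reference, I assume the settings above would be applied like this (an untested sketch on my side):

    # sketch: apply to all OSDs via the cluster config database
    ceph config set osd bluestore_prefer_deferred_size 0
    ceph config set osd bluestore_prefer_deferred_size_ssd 0

As far as I understand, these options control the size threshold below which writes take the deferred (WAL double-write) path, rather than removing the WAL device itself.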

Samuel



huxiaoyu@xxxxxxxxxxxx
 
From: Boris Behrens
Date: 2022-04-25 10:26
To: huxiaoyu@xxxxxxxxxxxx
CC: ceph-users
Subject: Re:  How I disable DB and WAL for an OSD for improving 8K performance
Hi Samuel,

IIRC, at least the DB (I am not sure whether flash drives use the 1 GB WAL) is always located on the same device as the OSD when it is not explicitly configured elsewhere. On SSDs/NVMes, people tend not to separate the DB/WAL onto other devices.
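
For example (a sketch; the device paths are placeholders):

    # data, DB and WAL all colocated on the one device
    ceph-volume lvm create --data /dev/sdb
    # DB (and, implicitly, the WAL) on a separate fast device
    ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p1

If no --block.db or --block.wal is given, as in the first command, everything stays on the data device.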

Cheers
 Boris

On Mon, 25 Apr 2022 at 10:09, huxiaoyu@xxxxxxxxxxxx <huxiaoyu@xxxxxxxxxxxx> wrote:
Dear Ceph folks,

When setting up an all-flash Ceph cluster with 8 nodes, I am wondering whether I should disable (or turn off) the DB and WAL for SSD-based OSDs for better 8K IO performance.

Normally, for HDD OSDs, I create 30GB+ partitions on separate SSDs as their DB/WAL. For (enterprise-level) SSD-based OSDs, one option is to create a partition on each SSD as the DB/WAL and use the rest as the OSD's data partition. However, I wonder whether this would improve or degrade performance: since the WAL is pure write buffering, it could cause double writes to the same SSD and thus hurt performance...
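
To compare the two layouts, I would run a small 8K random-write test against an RBD image, something like this (a sketch; the pool and image names are made up):

    # 8K random writes for 60 seconds at queue depth 32
    fio --name=rand8k --ioengine=rbd --clientname=admin \
        --pool=testpool --rbdname=testimg \
        --rw=randwrite --bs=8k --iodepth=32 --direct=1 \
        --runtime=60 --time_based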

Any comments or suggestions are highly appreciated.

Samuel



huxiaoyu@xxxxxxxxxxxx


-- 
This time, as an exception, the self-help group "UTF-8 problems" will meet in the large hall.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



