Re: LevelDB support status is still experimental on Giant?

Compared to Filestore on SSD (we run LevelDB on top of SSD). The usage pattern is RBD sequential write (64K * QD8) and random write (4K * QD8); reads seem on par.

 

I would suspect a KV backend on HDD will be even worse compared to Filestore on HDD.
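
For reference, a minimal fio job sketch that approximates the pattern above, assuming fio's rbd ioengine and hypothetical pool/image names:

    [global]
    ioengine=rbd
    clientname=admin
    pool=rbd
    rbdname=testimg
    direct=1
    iodepth=8
    runtime=300
    time_based

    [seq-write-64k]
    rw=write
    bs=64k

    [rand-write-4k]
    ; stonewall serializes this job after the sequential one
    stonewall
    rw=randwrite
    bs=4k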

 

From: Satoru Funai [mailto:satoru.funai@xxxxxxxxx]
Sent: Tuesday, December 2, 2014 1:27 PM
To: Chen, Xiaoxi
Cc: ceph-users@xxxxxxxx; Haomai Wang
Subject: Re: [ceph-users] LevelDB support status is still experimental on Giant?

 

Hi Xiaoxi,
Thanks for the very useful information.
Can you share more details about the "terribly bad performance"? Compared against what, and under what kind of usage pattern?
I'm just interested in a key/value backend for better cost/performance without expensive hardware such as SSDs or Fusion-io.
Regards,
Satoru Funai


From: "Xiaoxi Chen" <xiaoxi.chen@xxxxxxxxx>
To: "Haomai Wang" <haomaiwang@xxxxxxxxx>
Cc: "Satoru Funai" <satoru.funai@xxxxxxxxx>, ceph-users@xxxxxxxx
Sent: Monday, December 1, 2014 11:26:56 PM
Subject: RE: [ceph-users] LevelDB support status is still experimental on Giant?

Range query is not that important on today's SSDs----you can see very high random read IOPS in SSD specs, and it gets higher day by day. The key problem here is trying to exactly match one query (get/put) to one SSD IO (read/write), eliminating the read/write amplification. We kind of believe OpenNvmKV may be the right approach.
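
To make that goal concrete, here is a toy back-of-the-envelope model (all numbers are assumptions, not measurements) of per-get read cost in an LSM store versus a design that maps one get to exactly one flash read:

    # Toy model: expected SSD reads per point lookup.
    # All inputs are assumed values, for illustration only.
    LEVELS = 5             # assumed on-disk LSM levels
    BLOOM_FP_RATE = 0.01   # assumed bloom filter false-positive rate

    # LSM get: one read for the level holding the key, plus an expected
    # false-positive read for each level probed before it.
    lsm_reads = 1 + (LEVELS - 1) * BLOOM_FP_RATE

    # A store with a direct index (the "one query == one SSD IO" goal)
    # answers every get with exactly one flash read.
    direct_reads = 1

    print(f"LSM expected reads/get:           {lsm_reads:.2f}")
    print(f"Direct-mapped expected reads/get: {direct_reads}")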

 

Back to the context of Ceph: can we find some use cases for today's key-value backend? We would like to learn from the community what the workload pattern is if you want a K-V backed Ceph, or whether you just want to give it a try. I think before we get a suitable DB backend, we would be better off optimizing the key-value backend code to support a specific kind of load.

 

 

 

From: Haomai Wang [mailto:haomaiwang@xxxxxxxxx]
Sent: Monday, December 1, 2014 10:14 PM
To: Chen, Xiaoxi
Cc: Satoru Funai; ceph-users@xxxxxxxx
Subject: Re: [ceph-users] LevelDB support status is still experimental on Giant?

 

Exactly. I'm just looking forward to a better DB backend suitable for KeyValueStore. It may be a traditional B-tree design.

 

Originally I thought Kinetic would be a good backend, but it doesn't support range queries :-(
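
For context on why range queries matter here: the backend needs ordered prefix scans, e.g. for object and omap listing. A toy sketch with the plyvel LevelDB binding (names are illustrative, not Ceph code):

    import plyvel

    # LevelDB keeps keys sorted, so a prefix scan is one cheap iterator.
    db = plyvel.DB('/tmp/kv-demo', create_if_missing=True)
    db.put(b'object:bar', b'...')
    db.put(b'object:foo', b'...')

    # Ordered prefix scan -- the operation a get/put-only interface
    # (like Kinetic, as noted above) cannot serve directly.
    for key, value in db.iterator(prefix=b'object:'):
        print(key, value)

    db.close()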

 

 

 

On Mon, Dec 1, 2014 at 10:04 PM, Chen, Xiaoxi <xiaoxi.chen@xxxxxxxxx> wrote:

We have tested it for a while; basically it seems kind of stable, but it shows terribly bad performance.

 

This is not the fault of Ceph but of LevelDB, or more generally, of all K-V storage with an LSM design (RocksDB, etc.). The LSM tree structure naturally introduces very large write amplification----10X to 20X when you have tens of GB of data per OSD. So you always see very bad sequential write performance (~200MB/s for a 12-SSD setup); we can share more details in the performance meeting.
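
As a back-of-the-envelope check of that figure (all inputs are assumed numbers, not measurements):

    # Rough model: client-visible sequential write bandwidth under LSM
    # write amplification.
    ssds = 12
    raw_write_mb_s_per_ssd = 350   # assumed raw sequential write per SSD
    write_amplification = 18       # assumed, within the 10X-20X cited above

    client_visible = ssds * raw_write_mb_s_per_ssd / write_amplification
    print(f"~{client_visible:.0f} MB/s client-visible")   # ~233 MB/s,
    # in the same ballpark as the ~200MB/s observed above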

 

To this end, a key-value backend with LevelDB is not usable for RBD, but it may be workable (not tested) in LOSF cases (tons of small objects stored via RADOS, where a K-V backend can keep the FS metadata from becoming the bottleneck).
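
A minimal sketch of that LOSF pattern via the python-rados binding (pool name, object count, and object size are assumptions for illustration):

    import rados

    # Write tons of small objects straight into RADOS. With a filesystem
    # backend each object costs per-file metadata; a K-V backend folds
    # that bookkeeping into the database instead.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('losf-test')   # hypothetical pool
        try:
            for i in range(100000):
                ioctx.write_full('small-%07d' % i, b'x' * 1024)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()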

 

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Haomai Wang
Sent: Monday, December 1, 2014 9:48 PM
To: Satoru Funai
Cc: ceph-users@xxxxxxxx
Subject: Re: [ceph-users] LevelDB support status is still experimental on Giant?

 

Yeah, it's mainly used in test environments.
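
For anyone who wants to try it on a test cluster, the backend is selected per OSD in ceph.conf; a sketch assuming the Firefly/Giant-era option names (verify against the release notes for your version):

    [osd]
    ; experimental key/value object store; known as "keyvaluestore-dev"
    ; in Firefly-era releases -- check the exact name for your release
    osd objectstore = keyvaluestore-dev
    ; backing K-V library; leveldb is the default choice
    keyvaluestore backend = leveldb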

 

On Mon, Dec 1, 2014 at 6:29 PM, Satoru Funai <satoru.funai@xxxxxxxxx> wrote:

Hi guys,
I'm interested in using a key/value store as a backend for Ceph OSDs.
In the Firefly release, LevelDB support was described as experimental;
does it have the same status in the Giant release?
Regards,

Satoru Funai



 

--

Best Regards,

Wheat



 

--

Best Regards,

Wheat

 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
