RE: Bluestore with ZS

Ohh, for compilation?
Hmm, strange, for me make -j<core> is not much different. Need to look into it.
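
For reference, this is generic GNU make behavior rather than anything branch-specific: without -j, make runs a single job, so the parallelism has to be requested explicitly, e.g.

    make -j"$(nproc)"

nproc reports the number of available cores, so that runs one compile job per core.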

-----Original Message-----
From: Somnath Roy 
Sent: Tuesday, December 13, 2016 1:40 PM
To: 'Mark Nelson'
Cc: ceph-devel
Subject: RE: Bluestore with ZS

Have you set bluestore_num_kv_sync_threads = <number> (say 4)?

-----Original Message-----
From: Mark Nelson [mailto:mnelson@xxxxxxxxxx]
Sent: Tuesday, December 13, 2016 1:13 PM
To: Somnath Roy
Cc: ceph-devel
Subject: Re: Bluestore with ZS

I'm compiling it now, in fact, though for some reason it's only using a single thread to compile, so it's going *very* slowly.

Mark

On 12/13/2016 02:58 PM, Somnath Roy wrote:
> Mark,
> The multi kv sync code is quite stable and is giving ZS a >2x performance bump over the single-threaded code. If you are planning to try out ZS, I would highly recommend trying the following code base.
>
> https://github.com/somnathr/ceph/tree/wip-bluestore-multi-kv-sync-thread
>
> Add the following in the ceph.conf in addition to what I mentioned below.
>
> bluestore_num_kv_sync_threads = <number>
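> 
> Putting that together with the options from my mail below, the whole ceph.conf fragment would look roughly like this (4 threads is just an example value, and where you place it, e.g. [global] or [osd], depends on your setup):
> 
>     enable experimental unrecoverable data corrupting features = bluestore zs rocksdb
>     bluestore_sync_submit_transaction = false
>     bluestore_kvbackend = zs
>     bluestore_num_kv_sync_threads = 4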
>
> Let me know how it goes for you.
>
> Thanks & Regards
> Somnath
>
>
> -----Original Message-----
> From: Somnath Roy
> Sent: Wednesday, December 07, 2016 2:37 AM
> To: 'Mark Nelson'
> Cc: 'ceph-devel'
> Subject: RE: Bluestore with ZS
>
> Sage,
> Here is the multi kv sync code for your review.
>
> https://github.com/somnathr/ceph/tree/wip-bluestore-multi-kv-sync-thread
>
> It is giving ZS a significant performance boost, but we believe we can optimize the shim further. We are working on that.
>
> BTW, I have coded this on top of your allocator changes
> (https://github.com/ceph/ceph/pull/12343)
>
> Thanks & Regards
> Somnath
>
>
> -----Original Message-----
> From: Somnath Roy
> Sent: Tuesday, December 06, 2016 7:56 AM
> To: 'Mark Nelson'
> Cc: ceph-devel
> Subject: RE: Bluestore with ZS
>
> No problem, take your time. Hopefully by then we can give you a stable multi_kv version.
>
> -----Original Message-----
> From: Mark Nelson [mailto:mnelson@xxxxxxxxxx]
> Sent: Tuesday, December 06, 2016 7:54 AM
> To: Somnath Roy
> Cc: ceph-devel
> Subject: Re: Bluestore with ZS
>
> Excellent, Somnath!
>
> I will attempt to test this today, though I am also going to be looking at the new RBD erasure coding stuff, so it might be a day or two.
>
> Mark
>
> On 12/06/2016 02:33 AM, Somnath Roy wrote:
>> Mark,
>> Please find the Bluestore + ZS integrated code synced with today's master in the following location.
>>
>> https://github.com/somnathr/ceph/tree/wip-bluestore-zs
>>
>> As discussed in the standup, this is with a single kv_sync_thread.
>> I am still cleaning up the multi kv_sync_thread version and will send out a pull request for Sage to review, hopefully by tomorrow.
>>
>> Here are the steps you need to follow to use this.
>>
>> 1.  ./do_cmake.sh -DWITH_ZS=1
>>
>> 2. make and make install
>>
>> 3. In ceph.conf, add the following options.
>>
>> enable experimental unrecoverable data corrupting features = bluestore zs rocksdb
>> bluestore_sync_submit_transaction = false
>> bluestore_kvbackend = zs
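>> 
>> Put together, and assuming do_cmake.sh's default out-of-source build directory, the build half of that is roughly:
>> 
>>     ./do_cmake.sh -DWITH_ZS=1
>>     cd build
>>     make -j"$(nproc)"
>>     make install    # may need root depending on the install prefix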
>>
>> With smaller volumes you will see RocksDB outperforming ZS, but for bigger volumes ZS is catching up fast.
>> The code (the shim layer, and thus BlueStore) is in no way using ZS optimally yet, and we are in the process of optimizing it further (multi kv_sync, more batching, etc.).
>> We will keep the community posted on this.
>>
>> Thanks & Regards
>> Somnath



