Re: Regarding bluestore release schedule

Hi,

>>I know it is early, but is there any chance to see your test results? And
>>how about the results on SSD?

There are some links to spreadsheet results on the performance weekly etherpad:

http://pad.ceph.com/p/performance_weekly

The latest results are here:
https://drive.google.com/open?id=0B2gTBZrkrnpZS1VVMTJTNnQta28

----- Original Message -----
From: "Zhang Huan" <zhanghuan@xxxxxxxxx>
To: "Sage Weil" <sweil@xxxxxxxxxx>
Cc: "ceph-devel" <ceph-devel@xxxxxxxxxxxxxxx>
Sent: Thursday, January 14, 2016 07:37:14
Subject: Re: Regarding bluestore release schedule 

> On Jan 13, 2016, at 9:45 PM, Sage Weil <sweil@xxxxxxxxxx> wrote: 
> 
> On Wed, 13 Jan 2016, Zhang Huan wrote: 
>> Hello Sage, 
>> 
>> I have been watching the development of bluestore for a while. I am 
>> interested in bluestore because I believe putting the object store on 
>> top of a raw device is the right way. I would like to know if you have a 
>> release schedule, and which version it will be merged into? I also 
>> notice that there are big locks (Collection->lock and 
>> StupidAllocator->lock, etc.) in the write path; this might be an issue 
>> for SSDs. Do you have any plan to break them into finer locks? 
> 
> It's in master now and it will be part of the jewel release. It won't 
> be the default choice yet, since it is still very new and unproven, but so 
> far performance is looking better than FileStore on HDDs. 
I know it is early, but is there any chance to see your test results? And 
how about the results on SSD? 

> 
> The collection->lock doesn't need to be broken up since it's hidden 
> beneath the OSD's per-pg lock anyway. 
Thanks, I found the per-PG lock. Since per-PG locks already exist, are the 
per-collection locks actually not needed? 
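
For illustration, a minimal sketch (with hypothetical names, not the actual 
Ceph classes) of why an inner per-collection lock is effectively uncontended 
when the caller already serializes writes behind a per-PG lock: 

// Minimal sketch with hypothetical names (not actual Ceph code): if every
// write to a collection is already serialized by the owning PG's lock, the
// inner collection mutex is never contended.
#include <map>
#include <mutex>
#include <string>

struct Collection {
  std::mutex lock;               // analogous to a per-collection lock
  std::map<std::string, std::string> objects;
};

struct PG {
  std::mutex lock;               // analogous to the OSD's per-PG lock
  Collection coll;

  void queue_write(const std::string& oid, const std::string& data) {
    std::lock_guard<std::mutex> pg_guard(lock);    // outer per-PG lock
    // Only one thread per PG reaches this point at a time, so the
    // collection lock below never blocks in practice.
    std::lock_guard<std::mutex> c_guard(coll.lock);
    coll.objects[oid] = data;
  }
};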

> The allocator lock could be 
> sharded, though, by creating multiple allocation groups (ala XFS). This 
> is partly why the Allocator interface is abstract and the current 
> implementation is called StupidAllocator :). If you can demonstrate 
> that the lock is affecting performance we should definitely improve that! 
It would be an interesting thing to look into :-) 
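
As a rough sketch of what sharding the allocator into allocation groups 
(à la XFS) could look like: the names below are made up rather than the 
actual Ceph Allocator interface, but the idea is that each group carries 
its own lock so allocations from different groups do not contend. 

// Rough sketch with made-up names (not the actual Ceph Allocator API):
// shard the free space into N allocation groups, each with its own lock,
// so concurrent allocations from different groups proceed in parallel.
#include <cstdint>
#include <mutex>
#include <optional>
#include <vector>

class ShardedAllocator {
  struct Group {
    std::mutex lock;
    uint64_t next = 0, end = 0;  // trivial bump allocator per group
  };
  std::vector<Group> groups;

public:
  ShardedAllocator(uint64_t device_size, unsigned num_groups)
      : groups(num_groups) {
    uint64_t per = device_size / num_groups;
    for (unsigned i = 0; i < num_groups; ++i) {
      groups[i].next = i * per;
      groups[i].end  = (i + 1) * per;
    }
  }

  // Pick a group from a hint (e.g. a PG id) so related writes stay local;
  // fall back to the other groups when the preferred one is full.
  std::optional<uint64_t> allocate(uint64_t len, uint64_t hint) {
    unsigned start = hint % groups.size();
    for (unsigned i = 0; i < groups.size(); ++i) {
      Group& g = groups[(start + i) % groups.size()];
      std::lock_guard<std::mutex> guard(g.lock);
      if (g.end - g.next >= len) {
        uint64_t off = g.next;
        g.next += len;
        return off;
      }
    }
    return std::nullopt;         // out of space in every group
  }
};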

> 
> sage 