Re: Re: half performance with keyvalue backend in 0.87

Yep, please be patient. I need more time.

On Mon, Nov 10, 2014 at 9:33 AM, 廖建锋 <Derek@xxxxxxxxx> wrote:
> Haomai Wang,
>        Do you have any progress on this performance issue?
>
>
>
> From: Haomai Wang
> Sent: 2014-10-31 10:05
> To: 廖建锋
> Cc: ceph-users; ceph-users
> Subject: Re: Re: Re: half performance with keyvalue backend in 0.87
>
> OK, I will explore it.
>
> On Fri, Oct 31, 2014 at 10:03 AM, 廖建锋 <Derek@xxxxxxxxx> wrote:
>> I am not sure whether it is sequential or random; I just use rsync to copy
>> millions of small picture files from our PC server to the Ceph cluster.
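>>
>> (A minimal sketch of that kind of copy, purely for illustration -- it assumes
>> the cluster is mounted via CephFS at /mnt/cephfs, and both paths are made up:)
>>
>>     # archive-mode copy of a large tree of small picture files
>>     rsync -a /data/pics/ /mnt/cephfs/pics/
>>
>> A copy like this issues roughly one small write per file, so on the OSD side
>> it looks much more like many small writes than one long sequential stream.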
>>
>> From: Haomai Wang
>> Sent: 2014-10-31 09:59
>> To: 廖建锋
>> Cc: ceph-users; ceph-users
>> Subject: Re: Re: Re: half performance with keyvalue backend in 0.87
>>
>> Thanks. Recently I have mainly been focusing on RBD performance for it
>> (random small writes).
>>
>> I would like to know more about your test setup. Is it sequential write?
>>
>> On Fri, Oct 31, 2014 at 9:48 AM, 廖建锋 <Derek@xxxxxxxxx> wrote:
>>> What I can tell is:
>>>       in 0.87, the OSDs write under 10 MB/s, but I/O utilization is about
>>> 95%;
>>>       in 0.80.6, the OSDs write about 20 MB/s, but I/O utilization is about
>>> 30%.
>>>
>>> iostat -mx 2 output with 0.87:
>>>
>>> Device:  rrqm/s  wrqm/s    r/s    w/s  rMB/s  wMB/s  avgrq-sz  avgqu-sz  await  svctm  %util
>>> sdb        0.00   43.00   9.00  85.50   0.95   1.18     46.14      1.36  14.49  10.01  94.55
>>> sdc        0.00   37.50   6.00  99.00   0.62  10.01    207.31      2.24  21.31   9.33  97.95
>>> sda        0.00    3.50   0.00   1.00   0.00   0.02     36.00      0.02  17.50  17.50   1.75
>>>
>>> avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
>>>            3.16   0.00     1.01    17.45    0.00  78.38
>>>
>>> Device:  rrqm/s  wrqm/s    r/s    w/s  rMB/s  wMB/s  avgrq-sz  avgqu-sz  await  svctm  %util
>>> sdb        0.00   36.50   0.00  47.50   0.00   1.09     47.07      0.82  17.17  16.71  79.35
>>> sdc        0.00   25.00  15.00  77.50   1.26   0.65     42.34      1.73  18.72  10.70  99.00
>>> sda        0.00    0.00   0.00   0.00   0.00   0.00      0.00      0.00   0.00   0.00   0.00
>>>
>>> From: Haomai Wang
>>> Sent: 2014-10-31 09:40
>>> To: 廖建锋
>>> Cc: ceph-users; ceph-users
>>> Subject: Re: Re: half performance with keyvalue backend in 0.87
>>>
>>> Yes, a persistence problem exists in 0.80.6, and we fixed it in Giant.
>>> But in Giant, other performance optimizations have also been applied. Could
>>> you tell us more about your tests?
>>>
>>> On Fri, Oct 31, 2014 at 8:27 AM, 廖建锋 <Derek@xxxxxxxxx> wrote:
>>>> Another problem I found is that the Ceph OSD directory contains millions
>>>> of small files, which will cause performance issues.
>>>>
>>>> 1008 => # pwd
>>>> /var/lib/ceph/osd/ceph-8/current
>>>>
>>>> 1007 => # ls |wc -l
>>>> 21451
>>>>
>>>> From: ceph-users
>>>> Sent: 2014-10-31 08:23
>>>> To: ceph-users
>>>> Subject: half performance with keyvalue backend in 0.87
>>>> Dear Ceph,
>>>>       I used the keyvalue backend in 0.80.6 and 0.80.7; the average speed
>>>> when rsyncing millions of small files is 10 MB/second.
>>>> When I upgraded to 0.87 (Giant), the speed slowed down to 5 MB/second, and
>>>> I don't know why. Is there any tuning option for this?
>>>> Will the superblock cause this performance slowdown?
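>>>>
>>>> (Rather than guessing option names, the keyvaluestore tunables that the
>>>> running OSD actually exposes can be listed through the admin socket on the
>>>> OSD's host -- osd.8 here is just the OSD from the listing above, and the
>>>> default admin socket location is assumed:)
>>>>
>>>>     # dump the OSD's running config and filter for keyvaluestore options
>>>>     ceph daemon osd.8 config show | grep -i keyvaluestore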
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>> --
>>> Best Regards,
>>>
>>> Wheat
>>
>>
>>
>> --
>> Best Regards,
>>
>> Wheat
>
>
>
> --
> Best Regards,
>
> Wheat



-- 
Best Regards,

Wheat
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




