Re: I/O Speed Comparisons

Hi Sage,

I would like to see the high ping problem fixed in 0.56.4.

Thanks!

Stefan

On 11.03.2013 at 23:56, Josh Durgin <josh.durgin@xxxxxxxxxxx> wrote:

> On 03/11/2013 08:02 AM, Wolfgang Hennerbichler wrote:
>> also during writes.
>> I've tested it now on linux with a virtio disk-drive:
>> 
>>     <disk type='network' device='disk'>
>>       <driver name='qemu' type='raw'/>
>>       <source protocol='rbd' name='rd/backup:rbd_cache=1'>
>>         <host name='rd-c2.ceph' port='6789'/>
>>         <host name='rd-c1.ceph' port='6789'/>
>>         <host name='wag-c1.ceph' port='6789'/>
>>       </source>
>>       <target dev='vda' bus='virtio'/>
>>       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
>>     </disk>
>> 
>> virsh console to the running VM.
>> Next - Write Test:
>> dd if=/dev/zero of=/bigfile bs=2M &
>> 
>> The serial console gets jerky and the VM becomes unresponsive. It doesn't
>> crash, but it's not 'healthy' either. CPU load isn't very high; it sits in
>> the waiting state a lot:
> 
> Does this only happen with rbd_cache turned on? If so, it may be the
> same cause as http://tracker.ceph.com/issues/3737.
> 
> Josh
> 
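One way to A/B test Josh's suggestion is to flip the rbd_cache option that is
already embedded in the disk's source name in Wolfgang's XML above (a sketch;
rbd_cache=0 or rbd_cache=false should both be accepted by Ceph's option parser):

    <source protocol='rbd' name='rd/backup:rbd_cache=0'>
      <host name='rd-c2.ceph' port='6789'/>
      <host name='rd-c1.ceph' port='6789'/>
      <host name='wag-c1.ceph' port='6789'/>
    </source>

If the stalls disappear with the cache off, that would point to the same cause
as the tracker issue above.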
>> Cpu(s):  0.0%us,  4.7%sy,  0.0%ni, 26.1%id, 68.4%wa,  0.0%hi,  0.3%si,  0.5%st
>>  1170 root      20   0 11736 2452  556 D  8.5  0.5   0:00.80 dd
>> 
>> 
>> Wolfgang
>> 
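For anyone trying to reproduce Wolfgang's test, a minimal sketch of the same
write test plus some per-device monitoring (iostat from the sysstat package is
an assumption here; Wolfgang's figures above came from top):

    # inside the guest: sequential write test, as in Wolfgang's mail
    dd if=/dev/zero of=/bigfile bs=2M &

    # in a second shell: watch %iowait and per-device await/utilisation
    iostat -x 2

This makes it easier to see how much time the guest spends waiting on the
virtio device while dd is running.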
>> On 03/11/2013 01:42 PM, Mark Nelson wrote:
>>> I guess the first question is: does the jerky mouse behavior only happen
>>> during reads, or during writes too?  How is the CPU utilization in each
>>> case?
>>> 
>>> Mark
>>> 
>>> On 03/11/2013 01:30 AM, Wolfgang Hennerbichler wrote:
>>>> Let me know if I can help out with testing somehow.
>>>> 
>>>> Wolfgang
>>>> ________________________________________
>>>> From: ceph-users-bounces@xxxxxxxxxxxxxx
>>>> [ceph-users-bounces@xxxxxxxxxxxxxx] on behalf of Mark Nelson
>>>> [mark.nelson@xxxxxxxxxxx]
>>>> Sent: Saturday, 09 March 2013 20:33
>>>> To: ceph-users@xxxxxxxxxxxxxx
>>>> Subject: Re: I/O Speed Comparisons
>>>> 
>>>> Thanks for all of this feedback guys!  It gives us some good data to try
>>>> to replicate on our end.  Hopefully I'll have some time next week to
>>>> take a look.
>>>> 
>>>> Thanks!
>>>> Mark
>>>> 
>>>> On 03/09/2013 08:14 AM, Erdem Agaoglu wrote:
>>>>> Mark,
>>>>> 
>>>>> If it's any help, we've done a small totally unreliable benchmark on our
>>>>> end. For a KVM instance, we had:
>>>>> 260MB/s write, 200MB/s read on local SAS disks attached as LVM LVs;
>>>>> 250MB/s write, 90MB/s read on RBD (32 OSDs, all SATA).
>>>>> 
>>>>> All sequential, a 10G network. It's more than enough currently but we'd
>>>>> like to improve RBD read performance.
>>>>> 
>>>>> Cheers,
>>>>> 
>>>>> 
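For comparison, a sketch of how similar sequential numbers could be collected
inside a guest (fio and the /fiotest path are assumptions here; Erdem didn't
say which tool produced the figures above):

    # sequential write, then sequential read, 4M blocks, O_DIRECT
    fio --name=seqwrite --rw=write --bs=4M --size=4G --direct=1 --filename=/fiotest
    fio --name=seqread  --rw=read  --bs=4M --size=4G --direct=1 --filename=/fiotest
    rm /fiotest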
>>>>> On Sat, Mar 9, 2013 at 7:27 AM, Andrew Thrift <andrew@xxxxxxxxxxxxxxxxx
>>>>> <mailto:andrew@xxxxxxxxxxxxxxxxx>> wrote:
>>>>> 
>>>>>      Mark,
>>>>> 
>>>>> 
>>>>>      I would just like to add, we too are seeing the same behavior with
>>>>>      QEMU/KVM/RBD.  Maybe it is a common symptom of high IO with this
>>>>>      setup.
>>>>> 
>>>>> 
>>>>> 
>>>>>      Regards,
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>>      Andrew
>>>>> 
>>>>> 
>>>>>      On 3/8/2013 12:46 AM, Mark Nelson wrote:
>>>>> 
>>>>>          On 03/07/2013 05:10 AM, Wolfgang Hennerbichler wrote:
>>>>> 
>>>>> 
>>>>> 
>>>>>              On 03/06/2013 02:31 PM, Mark Nelson wrote:
>>>>> 
>>>>>                  If you are doing sequential reads, you may benefit by
>>>>>                  increasing the read_ahead_kb value for each device in
>>>>>                  /sys/block/<device>/queue on the OSD hosts.
>>>>> 
>>>>> 
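Mark's read_ahead_kb suggestion above translates into something like the
following on each OSD host (sdb and the 4096 value are placeholders; the right
value depends on the hardware):

    echo 4096 > /sys/block/sdb/queue/read_ahead_kb
    cat /sys/block/sdb/queue/read_ahead_kb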
>>>>>              Thanks, that didn't really help. It seems the VM has to
>>>>>              handle too much I/O; even the mouse-cursor is jerking over
>>>>>              the screen when connecting via VNC. I guess this is the
>>>>>              wrong list, but it has somehow to do with librbd in
>>>>>              connection with KVM, as the same machine on LVM works
>>>>>              just OK.
>>>>> 
>>>>> 
>>>>>          Thanks for the heads up Wolfgang.  I'm going to be looking into
>>>>>          QEMU/KVM RBD performance in the coming weeks so I'll try to
>>>>>          watch out for this behaviour.
>>>>> 
>>>>> 
>>>>>              Wolfgang
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


