Performance degrade

IOWait looks like this:

07:23:07 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
07:23:07 PM  all    9.38    0.00   12.92    2.42    0.05    0.09    0.00    0.00   75.14
07:23:07 PM    0   23.93    0.00    7.57    1.70    0.00    0.01    0.00    0.00   66.79
07:23:07 PM    1   19.47    0.00   11.30    1.82    0.11    0.53    0.00    0.00   66.77
07:23:07 PM    2   12.92    0.00   12.26    1.55    0.31    0.04    0.00    0.00   72.92
07:23:07 PM    3    7.97    0.00   13.28    1.55    0.00    0.01    0.00    0.00   77.18
07:23:07 PM    4    4.65    0.00   17.09    7.68    0.02    0.06    0.00    0.00   70.49
07:23:07 PM    5    4.15    0.00   14.32    1.70    0.00    0.01    0.00    0.00   79.82
07:23:07 PM    6    2.03    0.00   13.75    1.66    0.00    0.01    0.00    0.00   82.54
07:23:07 PM    7    1.13    0.01   13.34    1.61    0.00    0.01    0.00    0.00   83.90
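
That is the per-CPU breakdown from the sysstat tools; if anyone wants to reproduce the same view, a command along these lines should give the same columns (interval and count are arbitrary):

  # one line per CPU, sampled every 5 seconds, 3 samples
  mpstat -P ALL 5 3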

Roland

2010/7/19 Tejas N. Bhise <tejas at gluster.com>:
> Hi Paul,
>
> You make a good point there.
>
> Hi Roland,
>
> Generally we have observed that it's good to have the same number of gluster threads as kernel threads (or the number of cores if not hyper-threading). You may not just be bottlenecking on CPU but also on disk. Did you check the iowaits?
>
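> For example, on an 8-core brick server something like this in the server volfile keeps the io-threads count near the core count (a rough sketch; the subvolume name is only illustrative):
>
>   # keep io-threads close to the number of cores on the brick server
>   volume iothreads
>     type performance/io-threads
>     # 8 threads for an 8-core box; tune from there
>     option thread-count 8
>     # "posix1" is only an illustrative subvolume name
>     subvolumes posix1
>   end-volume
>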
> One good option, since you have a powerful CPU, is host/software RAID (unless you already have hardware RAID). Use LVM and stripe across all or part of the disks (with raid5/raid6 if you like). A 64k stripe size seems to work well (it's not the best for all applications, so you will have to experiment there for best performance).
>
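> Something along these lines, for example (device names and filesystem are only placeholders; adjust to your layout):
>
>   # pool the disks and stripe across them with a 64k stripe size
>   # (an md raid5/raid6 device could sit underneath the PVs instead, if you want redundancy)
>   pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
>   vgcreate vg_bricks /dev/sdb /dev/sdc /dev/sdd /dev/sde
>   # -i = number of stripes (PVs), -I = stripe size in KB
>   lvcreate -i 4 -I 64 -l 100%FREE -n lv_brick0 vg_bricks
>   mkfs.xfs /dev/vg_bricks/lv_brick0
>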
> Regards,
> Tejas.
>
> ----- Original Message -----
> From: "pkoelle" <pkoelle at gmail.com>
> To: gluster-users at gluster.org
> Sent: Monday, July 19, 2010 9:57:25 PM
> Subject: Re: Performance degrade
>
> On 19.07.2010 17:10, Roland Rabben wrote:
>> I did try that on one of the clients. I removed all the performance
>> translators except io-threads. No improvement.
>> The server still uses a huge amount of CPU.
> 36 * 8 = 288 threads for IO alone. I don't know the specifics of GlusterFS,
> but common knowledge suggests that high thread counts are bad: you end up
> spending your CPU waiting on locks and in context switches.
>
> Why do you export each disk separately? You don't seem to care about
> disk failure, so you could put all the disks in one LVM VG and export LVs
> from that.
>
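> Roughly like this on the server side (the directory is only an example path where an LV is mounted):
>
>   # export one big LV-backed directory instead of 36 separate disks
>   volume posix1
>     type storage/posix
>     option directory /bricks/lv0
>   end-volume
>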
> cheers
> Paul
>
>>
>> Roland
>>
>> 2010/7/19 Andre Felipe Machado <andremachado at techforce.com.br>:
>>> Hello,
>>> Did you try to minimize or even NOT use any cache?
>>> With so many nodes, the cache coherency between them may have become an issue...
>>> Regards.
>>> Andre Felipe Machado
>>>
>>>
>>>
>>
>>
>>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>



-- 
Roland Rabben
Founder & CEO Jotta AS
Cell: +47 90 85 85 39
Phone: +47 21 04 29 00
Email: roland at jotta.no

