Performance degradation

2010/7/19 pkoelle <pkoelle at gmail.com>:
> On 19.07.2010 18:53, Roland Rabben wrote:
>>
>> I am not sure about the number of threads I should use. Your argument
>> sounds logical and I should try that.
>
> A cheap way to test the theory would be to lower io-threads to 2 or 3 per
> filesystem.


Tried that now. No real noticeable difference in CPU usage. I set
io-threads to 2 per brick; the relevant volfile section now looks
roughly like the sketch below.
Any suggestions?
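
For reference, this is roughly what one brick's section of the server
volfile looks like with that setting (only a sketch; the volume names
and export path are placeholders, not my actual config):

  volume brick1-posix
    type storage/posix
    option directory /data/brick1     # placeholder export path
  end-volume

  volume brick1-iothreads
    type performance/io-threads
    option thread-count 2             # was 8 per brick before
    subvolumes brick1-posix
  end-volume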

>>
>> First of all, I care about NOT losing files. That's why I replicate files.
>
> Then I suggest you provide some redundancy at the block level (SW-RAID?).
> Doing a full resync just because one disk failed seems risky but YMMV.
>
>> I am not familiar with LVM and how to use it. Is this a normal setup
>> for Gluster users? What are the pros and cons with LVM in a Glusterfs
>> setup?
>
> GlusterFS shouldn't care, as it operates at the filesystem level; LVM logical
> volumes (think partitions) are block devices. LVM allows you to group your
> disks into volume groups and carve logical volumes out of them (no reboot
> needed). We haven't noticed any performance overhead.
>
>>
>> Is it possible to create logical volumes from disks already
>> containing data, or would they need to be formatted? They are
>> formatted EXT3 today.
>
> No, LVM has its own partition type.
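
To make sure I understand the RAID/LVM suggestion, here is roughly what
it would look like on one server (only a sketch; device names, the
volume group name and sizes are invented, and both mdadm and pvcreate
would wipe the existing ext3 data on the disks):

  # optional block-level redundancy across the 36 data disks (SW-RAID)
  mdadm --create /dev/md0 --level=6 --raid-devices=36 /dev/sd[b-z] /dev/sda[a-k]

  # LVM on top: physical volume -> volume group -> logical volumes
  pvcreate /dev/md0
  vgcreate gluster_vg /dev/md0
  lvcreate --name export1 --size 2T gluster_vg
  mkfs.ext3 /dev/gluster_vg/export1
  mount /dev/gluster_vg/export1 /data/export1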


I was really hoping to avoid the need to copy all data out of my
storage servers to reconfigure them and copy the content back.

Any advice on how to get out of this problem is very welcome.

Regards

Roland

>
> cheers
> Paul
>
>>
>> Regards
>> Roland Rabben
>>
>> 2010/7/19 pkoelle <pkoelle at gmail.com>:
>>>
>>> On 19.07.2010 17:10, Roland Rabben wrote:
>>>>
>>>> I did try that on one of the clients. I removed all performance
>>>> translators except io-threads. No improvement.
>>>> The server still uses a huge amount of CPU.
>>>
>>> 36*8 = 288 threads alone for IO. I don't know the specifics of GlusterFS,
>>> but common knowledge suggests high thread counts are bad. You end up
>>> spending all your CPU time waiting on locks and in context switches.
>>>
>>> Why do you export each disk separately? You don't seem to care about disk
>>> failure, so you could put all disks in one LVM VG and export LVs from
>>> that.
>>>
>>> cheers
>>> Paul
>>>
>>>>
>>>> Roland
>>>>
>>>> 2010/7/19 Andre Felipe Machado <andremachado at techforce.com.br>:
>>>>>
>>>>> Hello,
>>>>> Did you try to minimize or even NOT use any cache?
>>>>> With so many nodes, the cache coherency between them may have become an
>>>>> issue...
>>>>> Regards.
>>>>> Andre Felipe Machado
>>>>>
>>>>
>>>>
>>>>
>>>
>>
>>
>>
>



-- 
Roland Rabben
Founder & CEO Jotta AS
Cell: +47 90 85 85 39
Phone: +47 21 04 29 00
Email: roland at jotta.no

