3.2.2 Performance Issue

Good news!

That seems to have improved performance quite a bit, so I'd like to share
what I've done. Originally, with only distribute configured on the volume, I
was seeing 100 MB/s writes. After moving to distribute/replicate, I was
getting 10 MB/s or less. Avati suggested that I was running out of in-inode
extended attribute space.
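
If you want to see the xattrs GlusterFS is storing on the backend, you can
dump them directly on a brick (the path here is just an example):

getfattr -d -m . -e hex /gluster/path/to/some/file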

I have reformatted /dev/sdb, which is what I'm currently using as my
gluster export, and created a single primary partition (/dev/sdb1). My
version (CentOS 5) of mke2fs (mkfs.ext3) has an undocumented -I option
for setting the inode size:

/sbin/mkfs.ext3 -I 512 /dev/sdb1
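
After formatting, it's worth double-checking that the new inode size actually
took effect (same tune2fs check as in the quoted thread below):

/sbin/tune2fs -l /dev/sdb1 | grep 'Inode size'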

Recreating my volume with dist/replicate:

[root@vm-container-0-3 ~]# gluster volume info pifs

Volume Name: pifs
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: vm-container-0-0:/gluster
Brick2: vm-container-0-1:/gluster
Brick3: vm-container-0-2:/gluster
Brick4: vm-container-0-3:/gluster
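
(For reference, the create/start commands for a layout like this would look
roughly as follows; I'm assuming replica 2 and the brick order shown above:)

gluster volume create pifs replica 2 transport tcp \
    vm-container-0-0:/gluster vm-container-0-1:/gluster \
    vm-container-0-2:/gluster vm-container-0-3:/gluster
gluster volume start pifs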

With the new volume in place, I'm consistently seeing 30+ MB/s writes with
no changes to the network setup.
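
In case anyone wants to run a similar test, a quick sequential write check
looks something like this (the mount point is just an example, and note that
dd without O_DIRECT will also exercise the page cache):

dd if=/dev/zero of=/mnt/pifs/ddtest bs=1M count=1024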

Thanks Avati!!

   --joey



On Tue, Aug 16, 2011 at 9:31 AM, Joey McDonald <joey@scare.org> wrote:

> Hi Avati,
>
>
>> Write performance in replicate is not only a function of disk and
>> network throughput; it also involves xattr performance, which in most
>> disk filesystems depends on the inode size. Can you give some more
>> details about the backend filesystem, specifically the inode size with
>> which it was formatted? If it was ext3 with the default 128-byte
>> inodes, it is very likely you are running out of in-inode xattr space
>> (due to enabling marker-related features like geo-sync or quota?) and
>> spilling into data blocks. If so, please reformat with a 512-byte or
>> 1KB inode size.
>>
>> Also, what about read performance in replicate?
>>
>
> Thanks for your insight on this issue. We are using ext3 for the gluster
> partition with the CentOS 5 default inode size:
>
> [root@vm-container-0-0 ~]# tune2fs -l /dev/sdb1 | grep Inode
> Inode count:              244219904
> Inodes per group:         32768
> Inode blocks per group:   1024
> Inode size:               128
>
> I'll reformat sdb1 with a 512-byte inode size, recreate my gluster volumes
> with distribute/replicate, and run my benchmark tests again.
>
>
>    --joey
>
>
>
>