Performance

On 04/20/2011 02:01 PM, Mohit Anchlia wrote:
> Is a 128K block size the right choice, given that my file sizes range from 70K to 2MB?
>
> Please find the results below:
>
> [root@dsdb1 ~]# dd if=/dev/zero of=/data/big.file bs=128k count=80k
> 81920+0 records in
> 81920+0 records out
> 10737418240 bytes (11 GB) copied, 14.751 seconds, 728 MB/s



> [root@dsdb1 ~]# echo 3>  /proc/sys/vm/drop_caches
> [root@dsdb1 ~]# dd of=/dev/null if=/data/big.file bs=128k
> 81920+0 records in
> 81920+0 records out
> 10737418240 bytes (11 GB) copied, 3.10485 seconds, 3.5 GB/s

Hmm ... this looks like it came from cache.  A 4-drive RAID0 isn't even 
remotely this fast.

Add an oflag=direct to the first dd, and an iflag=direct to the second 
dd, so we avoid the OS page cache for the moment.  It looks like either 
the caches aren't actually being dropped, or you had no space between 
the 3 and the > sign (in which case the shell redirects file descriptor 
3 rather than writing a 3 into /proc/sys/vm/drop_caches, so nothing 
gets dropped).
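
Something like this should do it (same file, block size, and count as 
your earlier run, just with the direct I/O flags added; GNU dd supports 
both):

        dd if=/dev/zero of=/data/big.file bs=128k count=80k oflag=direct
        echo 3 > /proc/sys/vm/drop_caches
        dd of=/dev/null if=/data/big.file bs=128k iflag=direct

With direct I/O in play, the numbers should be much closer to what the 
4-drive RAID0 can actually sustain.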



>
> On Wed, Apr 20, 2011 at 10:49 AM, Joe Landman
> <landman@scalableinformatics.com>  wrote:
>> On 04/20/2011 01:42 PM, Mohit Anchlia wrote:
>>>
>>>   mount
>>> /dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
>>> proc on /proc type proc (rw)
>>> sysfs on /sys type sysfs (rw)
>>> devpts on /dev/pts type devpts (rw,gid=5,mode=620)
>>> /dev/sdb1 on /boot type ext3 (rw)
>>> tmpfs on /dev/shm type tmpfs (rw)
>>> none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
>>> sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
>>> /dev/sda1 on /data type ext3 (rw)
>>> glusterfs#dsdb1:/stress-volume on /data/mnt-stress type fuse
>>> (rw,allow_other,default_permissions,max_read=131072)
>>
>> ok ...
>>
>> so the gluster volume is mounted at /data/mnt-stress, and the underlying
>> storage is /dev/sda1, which is ext3.
>>
>> Could you do this
>>
>>         dd if=/dev/zero of=/data/big.file bs=128k count=80k
>>         echo 3 > /proc/sys/vm/drop_caches
>>         dd of=/dev/null if=/data/big.file bs=128k
>>
>> so we can see the write and then read performance using 128k blocks?
>>
>> Also, since you are using the gluster native client, you don't get all the
>> nice NFS caching bits.  Gluster native client is somewhat slower than the
>> NFS client.
>>
>> So let's start with the write/read speed of the system before we deal with
>> the gluster side of things.
>>
>>>
>>>
>>> On Wed, Apr 20, 2011 at 10:39 AM, Joe Landman
>>> <landman@scalableinformatics.com>    wrote:
>>>>
>>>> On 04/20/2011 01:35 PM, Mohit Anchlia wrote:
>>>>>
>>>>> Should that command be there by default? I couldn't find lsscsi.
>>>>
>>>> How about
>>>>
>>>>         mount
>>>>
>>>> output?
>>>>
>>>>
>>
>>


-- 
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: landman@scalableinformatics.com
web  : http://scalableinformatics.com
        http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615

