Re: Streaming perf problem on 10g

Hi Shehjar,

Have you tested with another file system besides ext4, like XFS or ReiserFS?

How many SSDs are in the configuration, and what is the storage
controller (SAS, SATA, PCIe direct-connect)? 1.5 GB/s is a lot of
speed; that suggests at least 8 SSDs, but please confirm.
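
If it is easier, output from something like the following (standard
tools; the exact grep pattern is just a suggestion) would confirm the
controller and drive count:

# lspci | grep -i -E 'sas|sata|raid'
# cat /proc/partitions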

Also, you are not copying enough data in this test. How much DRAM is
in the server with the SSD? I would run dd with an I/O amount at least
double or triple the amount of memory in the system; 1 GB is not
enough, since most of it simply lands in the page cache.
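
For example, on a hypothetical server with 24 GB of RAM, I would write
at least 48-72 GB and include the final flush in the timing:

# dd if=/dev/zero of=bigfile bs=1M count=65536 conv=fdatasync

That is ~64 GB, and conv=fdatasync makes dd flush the file data before
it reports, so the page cache cannot absorb the whole write and inflate
the number.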

-Tommy

On Thu, Nov 4, 2010 at 1:20 AM, Shehjar Tikoo <shehjart@xxxxxxxxxxx> wrote:
> fibreraid@xxxxxxxxx wrote:
>>
>> Hi Shehjar,
>>
>> Can you provide the exact dd command you are running both locally and
>> for the NFS mount?
>
> On the SSD:
>
> # dd if=/dev/zero of=bigfile4 bs=1M count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB) copied, 0.690624 s, 1.5 GB/s
> # dd if=/dev/zero of=bigfile4 bs=1M count=1000 oflag=direct
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB) copied, 1.72764 s, 607 MB/s
>
> The SSD file system is ext4, mounted with
> (rw,noatime,nodiratime,data=writeback).
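>
> For illustration (device and mount point are placeholders, not the
> real ones), that corresponds to something like:
>
> # mount -o rw,noatime,nodiratime,data=writeback /dev/sdX /mnt/ssd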
>
> Another oddity is that oflag=direct gives better performance:
>
> On the NFS mount:
> # dd if=/dev/zero of=/tmp/testmount/bigfile3 bs=1M count=1000 oflag=direct
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB) copied, 3.7063 s, 283 MB/s
> # rm /tmp/testmount/bigfile3
> # dd if=/dev/zero of=/tmp/testmount/bigfile3 bs=1M count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB) copied, 9.66876 s, 108 MB/s
>
> The kernel on both server and client is 2.6.32-23, so I think this
> ext4 regression might be in play:
>
> http://thread.gmane.org/gmane.comp.file-systems.ext4/20360
>
> Thanks
> -Shehjar
>
>>
>> -Tommy
>>
>> On Wed, Nov 3, 2010 at 11:33 AM, Joe Landman <joe.landman@xxxxxxxxx>
>> wrote:
>>>
>>> On 11/03/2010 01:58 PM, Shehjar Tikoo wrote:
>>>>
>>>> Hi All,
>>>>
>>>> I am running into a performance problem with 2.6.32-23 Ubuntu Lucid on
>>>> both client and server.
>>>>
>>>> The disk is an SSD performing at 1.4-1.6 GB/s for a dd of a 6 GB file
>>>> in 64 KB blocks.
>>>>
>>> If the size of this file is comparable to or smaller than the client or
>>> server RAM, this number is meaningless.
>>>
>>>> The network is performing fine with many Gbps of iperf throughput.
>>>>
>>> GbE gets you 1 Gbps. 10GbE may get you 3-10 Gbps, depending upon many
>>> things. What are your numbers?
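>>>
>>> (For instance, "iperf -c <10GbE address of the server> -P 4 -t 30" from
>>> the client would show it; -P runs parallel streams and -t sets the test
>>> duration, both standard iperf client options.)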
>>>
>>>> Yet, the dd write performance over the NFS mount point ranges from
>>>> 96-105 MB/s for a 6 GB file in 64 KB blocks.
>>>>
>>> Sounds like you are writing over the gigabit, and not the 10GbE
>>> interface.
>>>
>>>> I've tried changing the tcp_slot_table_entries and the wsize, but
>>>> there is negligible gain from either.
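>>>>
>>>> For reference, the knobs were along these lines (export path and values
>>>> here are illustrative, not the exact ones used):
>>>>
>>>> # sysctl -w sunrpc.tcp_slot_table_entries=128
>>>> # mount -t nfs -o wsize=1048576 server:/export /tmp/testmount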
>>>>
>>>> Does it sound like a client-side inefficiency?
>>>>
>>> Nope.
>>>
>>> --
>>> Joseph Landman, Ph.D
>>> Founder and CEO
>>> Scalable Informatics Inc.
>>> email: landman@xxxxxxxxxxxxxxxxxxxxxxx
>>> web : http://scalableinformatics.com
>>> phone: +1 734 786 8423 x121
>>> fax : +1 866 888 3112
>>> cell : +1 734 612 4615
>
>