Re: connection speeds between nodes




On Mar 8, 2011, at 12:25 PM, John Hodrien <J.H.Hodrien@xxxxxxxxxxx> wrote:

> On Tue, 8 Mar 2011, Ross Walker wrote:
> 
>> Well on my local disk I don't cache the data of tens or hundreds of clients
>> and a server can have a memory fault and oops just as easily as any client.
>> 
>> Also, I believe it doesn't sync every single write (unless the client
>> mounts with 'sync', which is only for special cases and not what I am
>> talking about), only when the client issues a sync or when the file is
>> closed. The client is free to use async I/O if it wants, but the server
>> SHOULD respect the client's wishes for synchronous I/O.
>> 
>> If you set the server 'async' then all io is async whether the client wants
>> it or not.
> 
> I think you're right that this is how it should work, I'm just not entirely
> sure that's actually generally the case (whether that's because typical
> applications try to do sync writes or if it's for other reasons, I don't
> know).

As always YMMV, but on the whole it's how it works.

ESX is an exception: it does an O_FSYNC on each write because it needs to know for certain that each one completed.

> Figures for just changing the server to sync, everything else identical.
> Client does not have 'sync' set as a mount option.  Both attached to the same
> gigabit switch (so favouring sync as far as you reasonably could with
> gigabit):
> 
> sync;time (dd if=/dev/zero of=testfile bs=1M count=10000;sync)
> 
> async: 78.8MB/sec
>  sync: 65.4MB/sec
> 
> That seems like a big enough performance hit to me to at least consider the
> merits of running async.

Yes, disabling the safety feature will make it run faster, just as disabling the safety on a gun will make you faster on the draw.

> That said, running dd with oflag=direct appears to bring the performance up to
> async levels:
> 
> oflag=direct with  sync nfs export: 81.5 MB/s
> oflag=direct with async nfs export: 87.4 MB/s
> 
> But if you've not got control over how your application writes out to disk,
> that's no help.

Most apps unfortunately don't let you configure how they handle I/O reads and writes, so you're stuck with however they behave.

A good-sized battery-backed write-back cache will often negate the O_FSYNC penalty.

-Ross

_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos

