On 30/01/18 16:22, J. Bruce Fields wrote:
On Tue, Jan 30, 2018 at 03:29:41PM +0000, Terry Barnaby wrote:
On 30/01/18 15:09, J. Bruce Fields wrote:
By comparison on my little home server (Fedora, ext4, a couple WD Black
1TB drives), with sync, that untar takes 7:44, about 8ms/file.
Ok, that is far more reasonable, so something is up on my systems :)
What speed do you get with the server export set to async?
I tried just now and got 4m2s.
The drives probably still have to do a seek or two per create; the
difference now is that we don't have to wait for one create to complete
before starting the next, so the drives can work in parallel.
So given that I'm striping across two drives, I *think* it makes sense
that I'm getting about double the performance with the async export
option.
But that doesn't explain the difference between async and local
performance (22s when I tried the same untar directly on the server, 25s
when I included a final sync in the timing). And your numbers are a
complete mystery.
I have just tried running the untar on our work systems. These are again
Fedora 27, but on newer hardware.
I set one of the servers' NFS exports to just rw (removed the async
option in /etc/exports and ran exportfs -arv), remounted this NFS file
system on a Fedora 27 client, and re-ran the test.
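(For reference, the two configurations differ only in that one option;
with current nfs-utils, plain rw defaults to sync. The export path here
is just a placeholder, not my actual /etc/exports:

    /data  *(rw,sync)     # sync: commit each write/create before replying
    /data  *(rw,async)    # async: may reply before data reaches disk

and exportfs -arv re-exports everything with the new options.)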
I have only waited 10 minutes, but the overall network data rate is on
the order of 0.1 MBytes/sec, so it looks like it will be a multi-hour
job, as at home.
So I have two completely separate systems with the same performance over
NFS.
With your NFS "sync" test, are you sure you set the "sync" mode on the
server and re-exported the file systems?
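You can double-check what the running server actually exports with:

    exportfs -v

which prints each export together with its effective options, including
sync or async.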
What's the disk configuration and what filesystem is this?
Those tests above were to a single SATA drive, a Western Digital Red 3TB
(WDC WD30EFRX-68EUZN0), using ext4.
Most of my tests have been to software RAID1 SATA disks: Western Digital
Red 2TB on one server, and Western Digital RE4 2TB (WDC WD2003FYYS-02W0B1)
on another server (a quad-core Xeon), all using ext4 and all with plenty
of RAM. All on stock Fedora 27 (both server and client), fully up to date.
Is it really expected for NFS to be this bad these days with a reasonably
typical operation, and are there no other tuning parameters that can help?
It's expected that the performance of single-threaded file creates will
depend on latency, not bandwidth.
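A rough sanity check with the numbers above: the untar issues one
synchronous create at a time, so the total time is roughly the number of
files times the per-file latency. At 8ms/file, the 7:44 (464s) figure
works out to about 58,000 files; more disk bandwidth does nothing to
shrink that 8ms.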
I believe high-performance servers use battery-backed write caches with
storage behind them that can do lots of IOPS.
(One thing I've been curious about is whether you could get better
performance cheaply on this kind of workload with ext3/4 striped across a
few drives and an external journal on SSD. But when I experimented with
that a few years ago, I found synchronous write latency wasn't much
better. I didn't investigate why not; maybe that's just the way SSDs
are.)
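If anyone wants to try that, a sketch of the setup (device names are
illustrative; assume the SSD partition is /dev/sdc1 and the striped
array is /dev/md0):

    # create a dedicated external journal device on the SSD
    # (the block size must match the filesystem's, hence -b 4096)
    mke2fs -b 4096 -O journal_dev /dev/sdc1
    # create ext4 on the striped array with its journal on the SSD
    mkfs.ext4 -b 4096 -J device=/dev/sdc1 /dev/md0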
--b.
_______________________________________________
devel mailing list -- devel@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to devel-leave@xxxxxxxxxxxxxxxxxxxxxxx