Re: sequential versus random I/O

On 1/29/2014 11:23 AM, Matt Garman wrote:
...
> In particular, we have a big NFS server that houses a collection of
> large files (average ~400 MB).  The server is read-only mounted by
> dozens of compute nodes.  Each compute node in turn runs dozens of
> processes that continually re-read those big files.  Generally
> speaking, should the NFS server (including RAID subsystem) be tuned
> for sequential I/O or random I/O?
...


If your workflow description is accurate, and assuming you're trying to
fix a bottleneck at the NFS server, the solution is simple and very well
known: local scratch space.  Given that description it's odd you're not
already using it, which leads me to believe the description isn't
entirely accurate.  If it is, simply copy each file to local scratch
disk and iterate over it locally.  If you're using diskless compute
nodes, that's an architectural flaw/oversight, as this workload as
described begs for scratch disk.
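The staging step above is trivial to script.  A minimal sketch, using
throwaway local directories to stand in for the real NFS mount and
scratch filesystem (the paths and the .dat suffix are made up for the
example):

```shell
#!/bin/sh
# Stand-in directories for demonstration only; a real setup would point
# these at the read-only NFS mount and a local scratch filesystem.
NFS_DIR=./demo_nfs          # stands in for the read-only NFS mount
SCRATCH=./demo_scratch      # stands in for local scratch disk

# Demo setup: fake one input file on the "NFS" side.
mkdir -p "$NFS_DIR" "$SCRATCH"
echo "file contents" > "$NFS_DIR/big1.dat"

# Stage each file to scratch once; the compute processes then re-read
# the local copy instead of hammering the NFS server.
for f in "$NFS_DIR"/*.dat; do
    base=$(basename "$f")
    [ -e "$SCRATCH/$base" ] || cp "$f" "$SCRATCH/$base"
done
```

The existence check means only the first process (or a cron/wrapper
job) pays the copy cost; every subsequent re-read is local disk I/O.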

-- 
Stan
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



