RE: [PATCH RFC v2] Performing direct I/O on sector-aligned requests

----- Original Message -----
> From: "J. Bruce Fields" <bfields@xxxxxxxxxxxx>
> To: "Alexandre Depoutovitch" <adepoutovitch@xxxxxxxxxx>
> Sent: Friday, April 27, 2012 4:51:20 PM
> Subject: Re: About Direct I/O
>
> On Fri, Apr 27, 2012 at 01:22:46PM -0700, Alexandre Depoutovitch
> wrote:

> >
> > The tests have been done on a hardware RAID10 array with 8 10K 450GB
> > SAS drives. Raid adapter was HP P410i.
>
> It might be worth also testing with a single drive if you want to see
> the worst case for synchronous writes.  (That adapter may have a
> battery-backed cache that lets it respond to writes immediately?)

Yes, the adapter has a battery-backed cache (1 GB), and you are right: it is 
the main reason for the significant improvement when doing direct I/O. Sync 
random writes happen an order of magnitude faster than reads. I also tested 
direct I/O on a cheap Western Digital 7.2K SATA drive (WD10EALX) on an Intel 
82801 SATA controller. There was no performance gain with direct I/O because 
write speed was in fact 1.5 times slower than read speed. However, there was 
no performance degradation either, whether direct or buffered I/O was used 
(in sync mode).
So it looks like direct I/O for NFS is beneficial for random, 
filesystem-unaligned, synchronous writes on adapters with NVRAM. In other 
cases it can be turned on/off either automatically, based on alignment and 
the O_SYNC flag, or manually, based on hardware characteristics. A sketch of 
the automatic check follows below.
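
For the automatic case, the decision could look something like the sketch 
below. This is only a minimal userspace illustration of the heuristic, not 
the patch itself; the function name, the hard-coded 512-byte sector size, 
and the O_SYNC/O_DSYNC check are my assumptions:

#include <fcntl.h>
#include <stdbool.h>
#include <stdio.h>
#include <sys/types.h>

#define SECTOR_SIZE 512	/* assumed logical sector size */

/* Hypothetical helper: decide whether a request qualifies for direct I/O
 * under the heuristic discussed above (sector alignment + sync semantics). */
static bool use_direct_io(off_t offset, size_t len, int open_flags)
{
	/* O_DIRECT requires the offset and length to be sector-aligned. */
	bool aligned = (offset % SECTOR_SIZE == 0) &&
		       (len % SECTOR_SIZE == 0);

	/* Per the measurements above, direct I/O only pays off for
	 * synchronous writes, so require O_SYNC (or O_DSYNC). */
	bool sync = (open_flags & (O_SYNC | O_DSYNC)) != 0;

	return aligned && sync;
}

int main(void)
{
	printf("%d\n", use_direct_io(4096, 512, O_SYNC)); /* 1: aligned, sync */
	printf("%d\n", use_direct_io(100, 512, O_SYNC));  /* 0: unaligned offset */
	printf("%d\n", use_direct_io(4096, 512, O_RDWR)); /* 0: not synchronous */
	return 0;
}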

Alex

