Re: write is faster than seek?

On Wed, Jun 11 2008, Dmitri Monakhov wrote:
> Jens Axboe <jens.axboe@xxxxxxxxxx> writes:
> 
> > On Wed, Jun 11 2008, Dmitri Monakhov wrote:
> >> Jens Axboe <jens.axboe@xxxxxxxxxx> writes:
> >> 
> >> > On Wed, Jun 11 2008, Dmitri Monakhov wrote:
> >> >> I've found that any break in a contiguous write sequence results in a significant
> >> >> performance drop. I have two types of requests:
> >> >> 1)Ideally sequential  writes:
> >> >>    for(i=0;i<num;i++) {
> >> >>        write(fd, chunk, page_size*32);
> >> >>    } 
> >> >>    fsync(fd);
> >> >> 
> >> >> 2) Sequential writes with a gap at every 32nd page
> >> >>    for(i=0;i<num;i++) {
> >> >>        write(fd, chunk, page_size*31);
> >> >>        lseek(fd, page_size, SEEK_CUR);
> >> >>    }
> >> >>    fsync(fd);
> >> >> 
> >> >> I've found that the second IO pattern is about two times slower than the
> >> >> first one, regardless of the I/O scheduler or the disk hardware. It is not
> >> >> clear to me why this happens. Is it Linux specific, or general hardware
> >> >> behaviour? I naively expected that the disk hardware could merge several
> >> >> 31-page requests into one contiguous request by filling the holes with
> >> >> some sort of dummy activity.
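
For reference, a minimal self-contained sketch of the two patterns is shown
below. It is a reconstruction, not the actual wr_test source used later in
this thread; the argument order <device> <pages_per_write> <do_gap>
<iterations> is only inferred from the command lines quoted further down,
and error handling is kept to a minimum.

    /* Hypothetical reconstruction of the test program (not the real wr_test).
     * Usage: ./wr_test <device> <pages_per_write> <do_gap> <iterations>
     * Pattern 1: ./wr_test /dev/sda3 32 0 800
     * Pattern 2: ./wr_test /dev/sda3 31 1 800
     * Note: this writes directly to whatever device is given. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
            if (argc != 5) {
                    fprintf(stderr, "usage: %s <device> <pages> <do_gap> <iterations>\n",
                            argv[0]);
                    return 1;
            }

            long page_size = sysconf(_SC_PAGESIZE);
            int pages  = atoi(argv[2]);   /* pages written per iteration */
            int do_gap = atoi(argv[3]);   /* 1: skip one page after each write */
            int num    = atoi(argv[4]);   /* number of iterations */

            int fd = open(argv[1], O_WRONLY);
            if (fd < 0) {
                    perror("open");
                    return 1;
            }

            char *chunk = malloc(pages * page_size);
            if (!chunk) {
                    perror("malloc");
                    return 1;
            }
            memset(chunk, 0xaa, pages * page_size);

            for (int i = 0; i < num; i++) {
                    /* pattern 1: back-to-back writes; pattern 2: leave a
                     * one-page hole after every write */
                    if (write(fd, chunk, pages * page_size) < 0) {
                            perror("write");
                            return 1;
                    }
                    if (do_gap && lseek(fd, page_size, SEEK_CUR) < 0) {
                            perror("lseek");
                            return 1;
                    }
            }

            if (fsync(fd) < 0) {
                    perror("fsync");
                    return 1;
            }

            close(fd);
            free(chunk);
            return 0;
    }

Built with e.g. "gcc -O2 -std=gnu99 -o wr_test wr_test.c", this should
reproduce the buffered-write-plus-fsync behaviour described above.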
> >> >
> >> > Performance should be about the same. The first is always going to be a
> >> > little faster, on some hardware probably quite a bit. Are you using
> >> > write-back caching on the drive? I ran a quick test here, and the second
> >> > test is about 5% slower on this drive.
> >> Hmmm... that is definitely not what happens in my case.
> >> I've tested the following SATA drive, with and without write cache:
> >> AHCI
> >> ata7.00: ATA-7: ST3250410AS, 3.AAC, max UDMA/133
> >> ata7.00: 488397168 sectors, multi 0: LBA48 NCQ (depth 31/32)
> >> In all cases it is about two times slower.
> >> 
> >> # time   /tmp/wr_test /dev/sda3 32 0 800
> >> real    0m1.183s
> >> user    0m0.002s
> >> sys     0m0.079s
> >> 
> >> # time   /tmp/wr_test /dev/sda3 31 1 800
> >> real    0m3.240s
> >> user    0m0.000s
> >> sys     0m0.078s
> >
> > Ahh, direct to the device. Try making a filesystem with a 4kB block size
> > on sda3, mount it, umount it, then rerun the test.
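
For example, something along these lines would do it (mkfs.ext3 is just one
possible choice of filesystem here, and /mnt is an arbitrary mount point):

# mkfs.ext3 -b 4096 /dev/sda3
# mount /dev/sda3 /mnt
# umount /mnt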
> Nope... no changes at all.
> # cat /sys/block/sda/queue/scheduler
> [noop] anticipatory deadline cfq
> # blockdev --getbsz  /dev/sda
> 4096
> # hdparm -W 0 /dev/sda
> 
> /dev/sda:
>  setting drive write-caching to 0 (off)
> 
> # time ./wr_test /dev/sda  32 0 800
> real    0m1.185s
> user    0m0.000s
> sys     0m0.106s
> 
> # time ./wr_test /dev/sda  31 1 800
> real    0m3.391s
> user    0m0.002s
> sys     0m0.112s
> 
> 
> I'll try playing with the request queue parameters.

Try finding out whether the issued IO pattern is OK first; you should
understand the problem before you attempt to fix it. Run blktrace on
/dev/sda while you run the two tests and compare the blkparse output!

-- 
Jens Axboe
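
A possible blktrace/blkparse session for the comparison suggested above
(the output basenames "seq" and "gap" are arbitrary, and blktrace needs
root privileges plus a mounted debugfs):

Start tracing in one shell, and stop it with ctrl-c once the test has
finished:

# blktrace -d /dev/sda -o seq

In another shell, run the first pattern, then format the trace:

# time ./wr_test /dev/sda 32 0 800
# blkparse -i seq > seq.txt

Repeat with "blktrace -d /dev/sda -o gap" and "./wr_test /dev/sda 31 1 800",
then compare seq.txt with gap.txt to see whether the gapped run is being
issued as many small requests instead of large merged ones.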

