Hi John
I hope it's not too late to offer my thoughts on this...
> Would greater performance be seen by reading from a block device in
> its native block size, or by increasing the block size to PAGE_SIZE?
> Or is there even any difference in performance between logical block
> sizes at all? (And we are still supposed to use buffer_head, right? I
> can't see anything other than drivers using bio, and filesystems are
> still using sb_bread and such.)
Here is how I think about it right now. Assume the disk sector size is
512 bytes while the block size is 4 KB; then every I/O operation reads
4 KB = 8 sectors. Is that good? Maybe yes, maybe no. On a sequential
read, however, there is a good chance you will need the adjacent
sectors anyway, so you save time by reading all 8 sectors in one go.
The kernel will cache them all, and your process will find them faster
when it needs them.
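To make that concrete, here is a minimal sketch (not taken from any
real filesystem) of reading one block through the buffer cache with
sb_bread(), which, as you noted, is still what filesystems use:

#include <linux/buffer_head.h>
#include <linux/errno.h>
#include <linux/fs.h>

/* Sketch: read one logical block through the buffer cache. With
 * sb->s_blocksize = 4096 on a 512-byte-sector disk, this single call
 * submits one 8-sector I/O (unless the block is already cached). */
static int read_one_block(struct super_block *sb, sector_t blocknr)
{
	struct buffer_head *bh;

	bh = sb_bread(sb, blocknr);	/* reads sb->s_blocksize bytes */
	if (!bh)
		return -EIO;

	/* bh->b_data now points at the 4 KB of (cached) block data */

	brelse(bh);	/* drop our reference; the page cache keeps the data */
	return 0;
}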
But please bear in mind that this heavily depends on your read pattern.
A similar trade-off shows up with read-ahead: you might gain a speedup,
or you might just fill the page cache faster with unneeded blocks and
end up with a very low cache hit ratio.
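From userspace you can at least tell the kernel what your access
pattern looks like, so read-ahead helps instead of polluting the page
cache. A small sketch (the file name is just a placeholder):

#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/some/file", O_RDONLY);	/* placeholder path */

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Sequential scan: let read-ahead be aggressive. */
	posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);

	/* For random access you would pass POSIX_FADV_RANDOM instead,
	 * which largely disables read-ahead for this descriptor. */

	/* ... read the file ... */
	close(fd);
	return 0;
}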
> And is there any way to increase the kernel-internal sector size? The
> ext4 development site says the maximum filesystem size addressable
> should be (filesystem-block-size * 2^64) but LDD3 says the maximum
> device size a driver can report is (512 * 2^64-1). (That should really
> be max sector number, not max size, because capacity 0 is useless and
> would allow 2^64 sectors to be addressed fully. As a side note.)
I don't understand: what do you mean by capacity 0? And so far, I think
both descriptions, the filesystem max size and the device max size, are
correct.
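For what it's worth, I believe the LDD3 figure simply reflects that a
driver hands the block layer a capacity counted in 512-byte sectors,
stored in a sector_t. A rough sketch, with a made-up function name:

#include <linux/genhd.h>
#include <linux/types.h>

/* Rough sketch of where the "512 * 2^64" figure comes from: the
 * capacity given to set_capacity() is a count of 512-byte sectors in
 * a sector_t, which is 64 bits wide (given CONFIG_LBD on 32-bit
 * machines, in kernels of the LDD3 era). */
static void mydrv_set_size(struct gendisk *gd, u64 size_in_bytes)
{
	sector_t nr_sectors = size_in_bytes >> 9;  /* bytes -> 512-byte sectors */

	set_capacity(gd, nr_sectors);
}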
regards,
Mulyadi
--
To unsubscribe from this list: send an email with
"unsubscribe kernelnewbies" to ecartis@xxxxxxxxxxxx
Please read the FAQ at http://kernelnewbies.org/FAQ