Jiro SEKIBA wrote:
I don't have any quantitative data of my own,
so I'll offer a somewhat subjective opinion.
Thanks, I appreciate it. :)
I've got a somewhat broad question on the suitability of nilfs for
various workloads and different backing storage devices. From what I
understand of the available documentation, the idea is to always write
sequentially, and thus avoid slow random writes on old/naive SSDs. Hence
I have a few questions.
1) Modern SSDs (e.g. Intel) do this logical/physical mapping internally,
so that the writes happen sequentially anyway. Does nilfs demonstrably
provide additional benefits on such modern SSDs with sensible firmware?
In terms of write performance, I guess it may not provide additional benefits.
However, it still has benefits with regard to continuous snapshots.
How does this compare with btrfs snapshots? When you say continuous,
what are the breakpoints between them?
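For reference: as I understand it, nilfs2 creates checkpoints continuously
(on every sync, and periodically in between), and any checkpoint can later
be promoted to a persistent snapshot, so the checkpoints are effectively
the breakpoints. A minimal illustration driving the nilfs-utils commands
from Python; /dev/sda1 is a placeholder device:

import subprocess

DEV = "/dev/sda1"  # placeholder nilfs2 device

# 'lscp' prints one line per checkpoint (header: CNO DATE TIME MODE ...).
out = subprocess.run(["lscp", DEV], capture_output=True, text=True,
                     check=True)
checkpoints = out.stdout.splitlines()[1:]  # skip the header row
print("%d checkpoints on %s" % (len(checkpoints), DEV))

if checkpoints:
    cno = checkpoints[-1].split()[0]  # number of the newest checkpoint
    # 'chcp ss' promotes a checkpoint to a snapshot, protecting it from
    # the garbage collector.
    subprocess.run(["chcp", "ss", DEV, cno], check=True)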
2) Mechanical disks suffer from slow random writes (or any random
operation for that matter), too. Do the benefits of nilfs show in random
write performance on mechanical disks?
I think it may have benefits, since nilfs writes data sequentially
regardless of where it was originally located. Still, some tweaks might be
required to make it faster than an ordinary filesystem like ext3.
Can you quantify what those tweaks may be, and when they might become
available/implemented?
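To put rough numbers on the random-write question, here is a minimal
timing sketch in Python (not a rigorous benchmark); the mount points are
placeholders, and O_SYNC is used so each write actually reaches the
device rather than just the page cache:

import os, random, time

def random_write_test(path, file_size=256 * 1024 * 1024,
                      block=4096, writes=2048):
    # Time 'writes' synchronous 4 KB writes at random offsets.
    fd = os.open(path, os.O_RDWR | os.O_CREAT | os.O_SYNC, 0o600)
    try:
        os.ftruncate(fd, file_size)
        buf = os.urandom(block)
        start = time.monotonic()
        for _ in range(writes):
            off = random.randrange(0, file_size // block) * block
            os.pwrite(fd, buf, off)
        return time.monotonic() - start
    finally:
        os.close(fd)

print("nilfs2:", random_write_test("/mnt/nilfs2/testfile"))
print("ext3:  ", random_write_test("/mnt/ext3/testfile"))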
3) How does this affect real-world read performance if nilfs is used on
a mechanical disk? How much additional file fragmentation in absolute
terms does nilfs cause?
The data gets scattered if you modify a file again and again,
but it will be almost sequential at creation time. So read performance
suffers mostly when files are modified frequently.
Right. So bad for certain tasks, such as databases.
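One way to observe this scattering directly: create a file, rewrite
random 4 KB ranges of it many times, and count its extents with
'filefrag' (assuming the kernel and filesystem support the FIEMAP
ioctl). The path below is a placeholder:

import os, random, subprocess

def rewrite_randomly(path, size=64 * 1024 * 1024, block=4096,
                     rounds=10000):
    with open(path, "wb") as f:
        f.write(b"\0" * size)          # sequential creation: few extents
    with open(path, "r+b") as f:
        for _ in range(rounds):        # repeated in-place modification
            f.seek(random.randrange(0, size // block) * block)
            f.write(os.urandom(block))
        f.flush()
        os.fsync(f.fileno())

def extent_count(path):
    out = subprocess.run(["filefrag", path], capture_output=True,
                         text=True)
    return out.stdout.strip()          # e.g. "path: 123 extents found"

path = "/mnt/nilfs2/frag_test"
rewrite_randomly(path)
print(extent_count(path))

On a log-structured filesystem the extent count should climb with the
number of rewrites, since each modified block is written to a new
location in the log.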
4) As the data gets expired, and snapshots get deleted, this will
inevitably lead to fragmentation, which will de-linearize writes as they
have to go into whatever holes are available in the data. How does this
affect nilfs write performance?
My current understanding is that the nilfs garbage collector moves live
(in-use) blocks to the end of the logs, so holes are not created (is that
correct?). However, this leads to another issue: the garbage collector
process, nilfs_cleanerd, consumes I/O bandwidth. This is the major I/O
performance bottleneck in the current implementation.
Since this moves data around, it sounds like it could be a major issue for
flash media, as it creates additional writes unnecessarily. Can this
be suppressed?
5) How does the specific writing amount compare against other file
systems? (I'm specifically interested in a comparison vs. ext2.) What I
mean by specific writing amount is: for writing, say, 100,000 randomly
sized files, how many write operations and how many MBs (or sectors) of
writes are required for the exact same operation performed on nilfs and
ext2 (e.g. as measured by vmstat -d)?
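A minimal sketch of that measurement in Python, reading the cumulative
sectors-written counter from /proc/diskstats (the same counter vmstat -d
reports); "sda" and "/mnt/test" are placeholders for the device under
test and its mount point:

import os, random

def sectors_written(dev="sda"):
    # In /proc/diskstats, after "major minor name" the 7th stat field is
    # the cumulative number of sectors written to the device.
    with open("/proc/diskstats") as f:
        for line in f:
            parts = line.split()
            if parts[2] == dev:
                return int(parts[9])
    raise ValueError("device %r not found" % dev)

def create_files(base, count=100000):
    # Create 'count' files of random size between 512 bytes and 16 KB.
    for i in range(count):
        with open(os.path.join(base, "f%06d" % i), "wb") as f:
            f.write(os.urandom(random.randrange(512, 16384)))

before = sectors_written()
create_files("/mnt/test")
os.sync()  # flush dirty pages so the counter reflects the workload
after = sectors_written()
print("%d sectors (%.1f MB) written"
      % (after - before, (after - before) * 512 / 1e6))

Running the same script with the mount point on nilfs and then on ext2
and comparing the two deltas gives the per-workload write amplification.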
You can find public benchmark results at the following links.
However, those are a bit old and current results may differ.
http://www.phoronix.com/scan.php?page=article&item=ext4_btrfs_nilfs2&num=1
http://www.linux-mag.com/cache/7345/1.html
Thanks.
Gordan