Re: SSD and non-SSD Suitability

Hi,

On 05/29/10 09:31, Jiro SEKIBA wrote:
> Hi,
>
> At Fri, 28 May 2010 10:50:31 +0100,
> Gordan Bobic wrote:
>   
>> Jiro SEKIBA wrote:
>>
>>     
>>> I haven't got any particular quantitative data by my own,
>>> so I'll write somewhat subjective opinion.
>>>       
>> Thanks, I appreciate it. :)
>>
>>     
>>>> I've got a somewhat broad question on the suitability of nilfs for 
>>>> various workloads and different backing storage devices. From what I 
>>>> understand from the documentation available, the idea is to always write 
>>>> sequentially, and thus avoid slow random writes on old/naive SSDs. Hence 
>>>> I have a few questions.
>>>>
>>>> 1) Modern SSDs (e.g. Intel) do this logical/physical mapping internally, 
>>>> so that the writes happen sequentially anyway. Does nilfs demonstrably 
>>>> provide additional benefits on such modern SSDs with sensible firmware?
>>>>         
>>> In terms of write performance, I guess it may not have additional
>>> benefits.  However, it still has benefits with regard to continuous
>>> snapshots.
>>>       
>> How does this compare with btrfs snapshots? When you say continuous, 
>> what are the breakpoints between them?
>>     
> I don't know btrfs well, but I guess you can create a "snapshot" of
> the current filesystem state; you cannot create yesterday's snapshot.
> Nilfs, on the other hand, can do the trick :) of creating a snapshot
> of yesterday's filesystem state.
>
> Nilfs creates snapshots from checkpoints.  Checkpoints are created
> automatically almost every time the filesystem changes (how often
> depends on how frequently the system changes).  If you leave
> checkpoints as they are, the garbage collector will eventually reclaim
> them as free disk space.  Until then, a checkpoint can be preserved by
> turning it into a snapshot.
>
>
>   
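For readers following along, the workflow Jiro describes can be sketched
with the nilfs-utils commands below.  The device path, mount point, and
checkpoint number 5 are placeholders; the script only runs the commands
if a nilfs2 volume and the tools are actually present.

```shell
# Placeholder device; adjust to your nilfs2 volume.
DEV=/dev/sdb1

if [ -b "$DEV" ] && command -v lscp >/dev/null 2>&1; then
    # List checkpoints; the CNO column gives each checkpoint number.
    lscp "$DEV"

    # Pin checkpoint 5 as a snapshot so the cleaner never reclaims it.
    chcp ss "$DEV" 5

    # Mount yesterday's state read-only via the cp= mount option.
    mount -t nilfs2 -o ro,cp=5 "$DEV" /mnt/snapshot

    # Demote it back to a plain, reclaimable checkpoint when done.
    chcp cp "$DEV" 5
else
    echo "nilfs2 volume or nilfs-utils not available; commands shown for illustration"
fi
```

Until `chcp cp` is run, the snapshot stays mountable no matter how much
the live filesystem changes, which is the "yesterday's snapshot" trick.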
>>>> 2) Mechanical disks suffer from slow random writes (or any random 
>>>> operation for that matter), too. Do the benefits of nilfs show in random 
>>>> write performance on mechanical disks?
>>>>         
>>> I think it may have benefits, since nilfs writes sequentially no
>>> matter where the data was located before writing.  But some tweaks
>>> might still be required to match the speed of an ordinary filesystem
>>> like ext3.
>>>       
>> Can you quantify what those tweaks may be, and when they might become 
>> available/implemented?
>>     
> I might have chosen the wrong word; what I meant is that more hacking
> is required to improve write performance.  It is not just a
> configuration matter :(.
>
>   
>>>> 3) How does this affect real-world read performance if nilfs is used on 
>>>> a mechanical disk? How much additional file fragmentation in absolute 
>>>> terms does nilfs cause?
>>>>         
>>> The data gets scattered if you modify a file again and again, but it
>>> will be almost sequential at creation time.  So the effect is large
>>> if files are modified frequently.
>>>       
>> Right. So bad for certain tasks, such as databases.
>>     
> Indeed.  Maybe /var-type directories too.
>
>   
>>>> 4) As the data gets expired, and snapshots get deleted, this will 
>>>> inevitably lead to fragmentation, which will de-linearize writes as they 
>>>> have to go into whatever holes are available in the data. How does this 
>>>> affect nilfs write performance?
>>>>         
>>> For now, my understanding is that the nilfs garbage collector moves
>>> live (in-use) blocks to the end of the log, so holes are not created
>>> (is that correct?).  However, this leads to another issue: the
>>> garbage collector process, nilfs_cleanerd, consumes I/O.  This is
>>> the major I/O performance bottleneck in the current implementation.
>>>       
>> Since this moves files, it sounds like this could be a major issue for 
>> flash media since it unnecessarily creates additional writes. Can this 
>> be suppressed?
>>     
> You can simply kill nilfs_cleanerd after you mount the nilfs partition.
>   
If you use the latest nilfs-utils, killing nilfs_cleanerd is no longer
necessary: you can use mount -o nogc, which does not start
nilfs_cleanerd at all.  Another possibility is to let nilfs_cleanerd
start and tweak min_free_segments and max_free_segments so that
cleanerd only does cleaning when necessary.
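For reference, the thresholds David mentions live in
/etc/nilfs_cleanerd.conf.  A minimal sketch follows; note that in
current nilfs-utils releases the keys are named min_clean_segments and
max_clean_segments (the names in this mail may belong to an older
release), and the values here are purely illustrative:

```
# Run the cleaner only when the number of free segments falls below this.
min_clean_segments   100

# Stop cleaning again once this many segments are free.
max_clean_segments   200

# Never reclaim checkpoints younger than this many seconds.
protection_period    3600
```

With min/max set this way, cleanerd stays idle while the disk has ample
free space, avoiding the background I/O discussed above.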
> In this case, of course, no garbage is reclaimed, so you will
> eventually end up with a full disk even if the live files don't occupy
> the whole storage.
>
> I don't have data at hand, but it gave about twice the write
> performance compared with running with the garbage collector.
>
> thanks,
>
> regards,
>
>   
>>>> 5) How does the specific writing amount measure against other file 
>>>> systems (I'm specifically interested in comparisons vs. ext2). What I 
>>>> mean by specific writing amount is for writing, say, 100,000 random 
>>>> sized files, how many write operations and MBs (or sectors) of writes 
>>>> are required for the exact same operation being performed on nilfs and 
>>>> ext2 (e.g. as measured by vmstat -d).
>>>>         
>>> You can find public benchmark results at the following links.
>>> However those are a bit old and current results may differ.
>>>
>>> http://www.phoronix.com/scan.php?page=article&item=ext4_btrfs_nilfs2&num=1
>>> http://www.linux-mag.com/cache/7345/1.html
>>>       
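Absent up-to-date benchmarks, the per-workload write volume Gordan asks
about can be measured directly from the kernel's per-device counters,
the same ones vmstat -d reads.  A minimal sketch; the device name sda
is a placeholder, and the field layout follows the kernel's
Documentation/block/stat.txt:

```shell
# Print the cumulative "sectors written" counter from a block-device
# stat file.  Field 7 of /sys/block/<dev>/stat is sectors written,
# in 512-byte units.
sectors_written() {
    awk '{ print $7 }' "$1"
}

# Against a real device, sample before and after the workload:
#   before=$(sectors_written /sys/block/sda/stat)
#   ... create the 100,000 files on the filesystem under test ...
#   after=$(sectors_written /sys/block/sda/stat)
#   echo "MB written: $(( (after - before) * 512 / 1048576 ))"
```

Running the same file-creation workload on nilfs2 and ext2 and comparing
the two deltas would answer the write-amplification question directly.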
>> Thanks.
>>
>> Gordan
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-nilfs" in
>> the body of a message to majordomo@xxxxxxxxxxxxxxx
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
Bye,
David Arendt

