On Tue, 5 Nov 2013, Figo.zhang wrote:
>>> Of course, if you don't use Linux on the desktop you don't really care -
>>> well, I do. Also, not everyone in this world has a UPS - which means such
>>> a huge buffer can lead to serious data loss in case of a power blackout.
>> I don't have a desk (just a lap), but I use Linux on all my computers and
>> I've never really noticed the problem. Maybe I'm just very patient, or
>> maybe I don't work with large data sets and slow devices.
>> However, I don't think data loss is really a related issue. Any process
>> that cares about data safety *must* use fsync at appropriate places. This
>> has always been true.
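
To make "use fsync at appropriate places" concrete, here is a minimal sketch
of the pattern (my illustration, not code from anyone's mail; the file name
is made up): write() only dirties the page cache, and the application must
not assume durability until fsync() returns.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	const char msg[] = "important record\n";
	int fd = open("data.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
	if (fd < 0) { perror("open"); return EXIT_FAILURE; }

	/* write() only puts the bytes in the page cache as dirty data;
	 * nothing is guaranteed to be on disk yet */
	if (write(fd, msg, sizeof(msg) - 1) != (ssize_t)(sizeof(msg) - 1)) {
		perror("write"); close(fd); return EXIT_FAILURE;
	}

	/* fsync() blocks until the kernel has flushed the file's dirty
	 * data (and its metadata) to the storage device; only after it
	 * returns may the application treat the record as durable */
	if (fsync(fd) < 0) { perror("fsync"); close(fd); return EXIT_FAILURE; }

	close(fd);
	return EXIT_SUCCESS;
}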
> => May I ask a question: on a filesystem like ext4, if an application
> modifies a file, it creates dirty data. If the power goes out while some of
> the metadata is being written to the journal on disk, will data be lost and
> the file be damaged?
With any filesystem and any OS, if you create dirty data but do not f*sync()
the data, there is a possibility that the system can go down between the time
the application creates the dirty data and the time the OS actually gets it
on disk. If the system goes down in this timeframe, the data will be lost,
and the file may be corrupted if only some of the data got written.
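
To shrink that window for whole-file updates, applications commonly write the
new contents to a temporary file, fsync() it, and rename() it over the old
one, so a crash leaves either the complete old file or the complete new file,
never a half-written mix. A sketch of that pattern (illustrative names,
assuming both files live in the current directory):

#define _GNU_SOURCE		/* O_DIRECTORY on older glibc */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static int replace_file(const char *path, const char *tmp,
			const char *buf, size_t len)
{
	int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd < 0)
		return -1;

	/* get the new contents durable on disk before exposing them */
	if (write(fd, buf, len) != (ssize_t)len || fsync(fd) < 0) {
		close(fd);
		unlink(tmp);
		return -1;
	}
	close(fd);

	/* rename() is atomic: readers see either the old or the new file */
	if (rename(tmp, path) < 0) {
		unlink(tmp);
		return -1;
	}

	/* fsync() the containing directory so the rename itself survives
	 * a crash */
	int dfd = open(".", O_RDONLY | O_DIRECTORY);
	if (dfd < 0)
		return -1;
	int rc = fsync(dfd);
	close(dfd);
	return rc;
}

int main(void)
{
	const char data[] = "new contents\n";

	if (replace_file("config.txt", "config.txt.tmp",
			 data, sizeof(data) - 1) < 0) {
		perror("replace_file");
		return EXIT_FAILURE;
	}
	return EXIT_SUCCESS;
}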
David Lang