Journaling won't necessarily save you from this problem. What you
gain from journaling is protection when a write is aborted halfway
through by a failure of any kind. The integrity you get is not that
every piece of data is intact and correct, but that nothing is left
in a transitory state: any transaction, i.e. any change of state from
A to B that touches more than one block on the disk, is applied as a
single, atomic, all-or-nothing change. That is the only guarantee
you get from a journaled file system.
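
To make that "A to B as one discrete change" idea concrete at the
application level, the usual trick is to write a temp file, fsync()
it, and rename() it over the old one, since rename within one file
system is atomic. Rough, untested sketch (the helper and the temp
file naming are just made up for illustration):

    /* atomic_replace.c - write-temp-then-rename pattern (sketch) */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int atomic_replace(const char *path, const char *data, size_t len)
    {
        char tmp[4096];
        snprintf(tmp, sizeof tmp, "%s.tmp", path);  /* made-up temp name */

        int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return -1;
        if (write(fd, data, len) != (ssize_t)len || /* new contents */
            fsync(fd) != 0) {                       /* force data to disk first */
            close(fd);
            unlink(tmp);
            return -1;
        }
        close(fd);
        /* rename() within one file system is atomic: readers see the old
           file or the new one, never a half-written mix of the two. */
        return rename(tmp, path);
    }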
I again point out that this is a file that was written once and never modified again. Nothing should have been writing to it. The metadata got trashed somehow; as I remember, it gave an "inode has wrong size" error, I told e2fsck to fix it, and the file ended up 0 bytes. I don't see how this could be blamed on anything but the kernel or the hardware, and this particular machine is fairly new, with decent quality hardware, so I'm pretty certain the hardware isn't flaky.
Though now that I think of it, I've had problems with the r300 driver locking up the system. Should a hardware lockup cause this kind of problem? If you ask me, no, not to a file that isn't even open, especially since I obsessively type "sync" before doing anything at all risky. But it seems I can't necessarily even trust sync to do its job. Go figure.
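
For what it's worth, if you want a per-file guarantee rather than
trusting a global sync, fsync()ing the file and then its containing
directory is the more targeted approach. Another rough sketch of
mine, not something I actually had in place:

    /* push one file's data and its directory entry to stable storage */
    #include <fcntl.h>
    #include <unistd.h>

    int flush_file_and_dir(int fd, const char *dirpath)
    {
        if (fsync(fd) != 0)                 /* file data + inode */
            return -1;

        int dirfd = open(dirpath, O_RDONLY | O_DIRECTORY);
        if (dirfd < 0)
            return -1;
        int rc = fsync(dirfd);              /* the directory entry itself */
        close(dirfd);
        return rc;
    }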