On Wed, 3 Jan 2007, Frank van Maarseveen wrote:
> On Wed, Jan 03, 2007 at 01:09:41PM -0800, Bryan Henderson wrote:
> > > On any decent filesystem st_ino should uniquely identify an object and
> > > reliably provide hardlink information. The UNIX world has relied upon this
> > > for decades. A filesystem with st_ino collisions without being hardlinked
> > > (or the other way around) needs a fix.
> > But for at least the last of those decades, filesystems that could not do
> > that were not uncommon. They had to present 32-bit inode numbers and either
> > allowed more than 4G files or just didn't have the means of assigning inode
> > numbers to files with the proper uniqueness. And the sky did not fall. I
> > don't have an explanation why.
> I think it's mostly high-end use, and high-end users tend to understand more.
> But we're going to see more really large filesystems in "normal" use, so...
>
> Currently, large file support is already necessary to handle DVD and video.
> It's also useful for virtualization images. So the failing stat() calls
> should already be a thing of the past with modern distributions.
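The st_ino uniqueness described above is what the usual hard-link detection
idiom depends on: tools typically compare st_dev and st_ino to decide whether
two names refer to the same object. A simplified, illustrative sketch (the
file names "a" and "b" are just placeholders):

#include <sys/types.h>
#include <sys/stat.h>
#include <stdio.h>

int main(void)
{
        struct stat s1, s2;

        if (stat("a", &s1) || stat("b", &s2)) {
                perror("stat");
                return 1;
        }
        /* same st_dev and st_ino => same underlying object (hard links) */
        if (s1.st_dev == s2.st_dev && s1.st_ino == s2.st_ino)
                printf("same object (hard links)\n");
        else
                printf("different objects\n");
        return 0;
}

If st_ino collides for unrelated files (or differs between hard links), a
check like this silently gives the wrong answer.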
As long as glibc compiles programs with a 32-bit ino_t by default, the problem
exists and is severe: programs that handle large files, such as coreutils, tar,
mc and mplayer, are already compiled with 64-bit ino_t and off_t, but the user
(or a script) may type something like:
cat >file.c <<EOF
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
        int h;
        struct stat st;

        if ((h = creat("foo", 0600)) < 0)
                perror("creat"), exit(1);
        /* with a 32-bit ino_t, this fstat() fails with EOVERFLOW
           whenever "foo" gets an inode number that does not fit */
        if (fstat(h, &st))
                perror("fstat"), exit(1);
        close(h);
        return 0;
}
EOF
gcc file.c; ./a.out
And you certainly do not want this to fail (unless you are out of disk space).

The difference is that with a 32-bit program and 64-bit off_t you get a
deterministic failure on large files, whereas with a 32-bit program and 64-bit
ino_t you get random failures.
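For comparison, the same file.c built with large file support enabled does not
hit the EOVERFLOW case, because -D_FILE_OFFSET_BITS=64 gives the program 64-bit
off_t and ino_t on glibc:

gcc -D_FILE_OFFSET_BITS=64 file.c; ./a.out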
Mikulas