nfs mount in error, wrong filesystem size shown

I've got a couple of servers with several mirrored gluster volumes. Two work
fine from all perspectives. One, the most recently set up, mounts remotely
as glusterfs, but fails badly as nfs. The mount appears to work when
requested, but the filesystem size shown is totally wrong and it is not in
fact accessible. This is with 3.1.4:

So we have for instance on one external system:

192.168.1.242:/std   309637120 138276672 155631808  48% /mnt/std
192.168.1.242:/store
                      19380692   2860644  15543300  16% /mnt/store

where the first nfs mount is correct and working, but the second is way off.
That was the same result as when /store was nfs mounted to another system
too. But on that same other system, /store mounts correctly as glusterfs:

vm2:/store        536704000  14459648 494981376   3% /mnt/store

with the real size shown, and the filesystem fully accessible.
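For reference, the two mount flavors being compared might look like this as
/etc/fstab entries (a sketch only; the options are common suggestions for
Gluster's built-in NFS server, which speaks NFSv3 over TCP, and are not taken
from the poster's actual configuration):

```
# NFS mount of the gluster volume (the one showing the wrong size).
# vers=3 and proto=tcp because Gluster's NFS server is v3/TCP only;
# nolock is often recommended with it.
192.168.1.242:/store  /mnt/store  nfs  vers=3,proto=tcp,nolock  0 0

# Native glusterfs (FUSE) mount of the same volume, which reports
# the correct size in this case.
vm2:/store            /mnt/store  glusterfs  defaults,_netdev  0 0
```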

The erroneous mount is also apparently dangerous. I tried writing a file to
it to see what would happen, and it garbaged the underlying filesystems. So
I did a full reformatting and recreation of the gluster volume before
retrying at that point - and still got the bad nfs mount for it.

The bad nfs mount happens no matter which of the two servers in the gluster
cluster the mount uses, too.

Any ideas what I'm hitting here? For the present purpose, we need to be able
to mount nfs, as we need some Macs to mount it.

Thanks,
Whit
